When duty calls
How new guidance for firms on their duty to customers is highlighting a widespread debate about human bias, nudges and sludges
To what extent is human bias a source of negative outcomes for people making financial decisions? A new set of rules, the Consumer Duty, developed by the UK financial regulator, the Financial Conduct Authority (FCA), takes the view that such biases are significant and need to be addressed.
This represents, in the words of the FCA, a ‘paradigm shift’ in its expectations of firms, with a new Consumer Principle requiring firms ‘to act to deliver good outcomes for retail customers’. This is designed to equip consumers to make effective decisions in their own interests, which in turn requires financial services firms to find ways to identify where biases are present in their processes and then take steps to mitigate their influence.
While this seems a positive ambition, it raises questions about the difficulties that firms may face in managing it effectively. To explore these, we reflect on the wider, lively conversations taking place about human bias and nudges.
Value of behavioural interventions
It is worth setting out the way that behavioural interventions are often considered a positive tool in financial services. These often take the form of nudges: that is, interventions that take human failings (e.g., inertia, loss aversion and unthinking conformity) as their starting point. While nudges do not reflect the full range of behavioural interventions that could be deployed, they do receive a great deal of attention in practice.
Nudge-type interventions can be broadly classified into three categories, each referencing a different human failing that leads people to reference automatic rather than systematic information-processing strategies and decision making. These are:
Nudges that tap into people’s propensity to choose options that demand the least (physical or intellectual) effort (i.e., the path of least resistance). These nudges focus on reducing the number of choices, making information more relevant and easily understood, or helping focus attention on the need to make a choice.
Nudges that tap into people’s propensity to conform or succumb to prevailing group norms and peer pressure. These nudges provide relevant social reference points to influence the adoption of the behaviour.
Nudges that harness people’s eagerness to align with identities that provide ways to maintain and build positive self-esteem. These nudges reframe behaviours in ways that appeal to salient identities.
An example of what is often considered the most successful behavioural finance intervention is the introduction of auto-enrolment for retirement savings. By requiring employers to automatically enrol employees into a pension plan – albeit with the choice for the employee to opt out – participation rates in retirement savings have soared from 47% to nearly 80%.
Another example is the FCA’s guidance on behavioural interventions to encourage UK banking customers to understand the costs of their bank accounts so that they act to reduce their fees by switching. The FCA found that annual summaries had no effect on consumer behaviour in terms of incurring overdraft charges, altering balance levels or switching to other current account providers. In contrast, its research showed that signing up to text alerts or mobile banking apps reduced the amount of unarranged overdraft charges incurred by 5% to 8%. Signing up to both services had an additional effect, resulting in a total reduction of 24%.
From nudges to sludges
While much of the time nudges can be used to drive positive outcomes, there is also the potential for a flip-side: ‘sludges’ (also known as dark nudges or poor design patterns) can cause people to act in ways that are not in their best interest. The term was coined by Richard Thaler, co-author of ‘Nudge’:
Sunstein and I stressed that the goal of a conscientious choice architect is to help people make better choices “as judged by themselves.” But what about activities that are essentially nudging for evil? This “sludge” just mucks things up and makes wise decision-making and prosocial activity more difficult.
It is these sludges that (on our reading at least) the FCA seems keen for firms to address. And while this seems entirely legitimate and reasonable, there are some considerations that may not make it as easy as it first appears.
Challenge 1: Just how much impact can a nudge (or indeed a sludge) have?
There is a very lively argument underway about the value of nudges, sparked by the publication of a recent paper suggesting that there is ‘no evidence for nudging’. This seems at odds with the FCA paper, which highlights behavioural biases as a concern (on the basis that these can be leveraged via sludges to produce poor consumer outcomes).
It is worth looking at the backdrop to this argument, which starts with a paper by Stephanie Mertens of the University of Geneva. Mertens and colleagues had conducted a meta-analysis of past research and found that ‘choice architecture interventions successfully promote behaviour change across key behavioural domains, populations, and locations.’
However, critics suggested that some of the papers included in the meta-analysis used problematic data and needed to be excluded. Other critics also suggested that although the analysis considered p-values (an indication of how likely an observed difference is to arise by chance alone) and effect sizes (how big, and therefore how relevant, the effect is), the authors had failed to fully account for ‘publication bias’. This is the selective reporting of results that show a positive finding for nudges, so that studies which find nudges don’t work are excluded or never published at all. It is, of course, well known that scientific journals typically favour the more interesting papers that show an experiment working.
So a new study re-examined the original paper, estimating the degree of publication bias and its impact on the effect size. The authors reported that, once adjusted, the effect size across all 212 studies was in fact zero. This clearly contradicts Mertens et al.’s suggestion that ‘choice architecture [nudging] is an effective and widely applicable behavior change tool.’
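The mechanics of publication bias can be made concrete with a toy simulation (this is purely illustrative and uses invented numbers, not the data from the Mertens et al. meta-analysis): even when the true effect of an intervention is exactly zero, a literature that only publishes ‘significant’ positive results will show a healthy pooled effect.

```python
import random
import statistics

# Illustrative sketch: assume a hypothetical nudge whose true effect is zero.
# Each of 1,000 studies measures it with sampling noise (standard error 0.1).
random.seed(42)

true_effect = 0.0
n_studies = 1000
study_se = 0.1  # per-study sampling noise on the estimated effect

observed = [random.gauss(true_effect, study_se) for _ in range(n_studies)]

# Crude publication filter: only studies whose estimate exceeds
# 1.96 standard errors (a 'significant' positive result) get published.
published = [e for e in observed if e > 1.96 * study_se]

print(f"Mean effect, all studies:    {statistics.mean(observed):+.3f}")
print(f"Mean effect, published only: {statistics.mean(published):+.3f}")
```

Across all studies the mean effect hovers around zero, but the published subset shows a clearly positive average effect, which is why a meta-analysis that ignores publication bias can report a substantial effect where none exists.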
As we might anticipate, this has spawned much discussion, some of it challenging the interpretation of the findings. From our perspective, the debate could perhaps move on: we would simply highlight that human bias is a narrow focus on which to develop interventions, and that we instead need to look at human behaviour more broadly. For example, as we have cited previously, Anne Custers examined attempts to ‘debias’ indebted participants by systematically redirecting their attention to the (accessible and understandable) information included in their credit card statement. This reflects the broadly held assumption in consumer behaviour theory that inaction in debt management is a function of a cognitive failure to direct attention appropriately.
However, only 20% of participants reported that they had received a credit card statement, whereas, by law, all of them would in fact have received one. Custers suggests that this finding is very significant in the light of the debt-management inaction problem: quite likely many consumers simply do not read their mail. With this insight, the notion of a cognitive deficit related to inefficiently directed attention to the credit card statement becomes much less plausible.
What are the implications of this for the Customer Duty discussion? If it is hard to prove that a nudge has a positive outcome, then it is surely equally hard to prove that a sludge has a negative one. With that in mind, the identification of ‘sludges’, in order to rectify them (and ensure that they do not exploit customers’ behavioural biases), does not look to be an easy task. And perhaps more broadly (to Custers’ point), human bias is a narrow view of our cognition and does not reflect the full scope of human behaviour.
Challenge 2: Is bias the right lens for Customer Duty?
To what extent can the successful interventions of defaults or alerts via text messages and mobile banking apps actually be considered nudges that correct ‘behavioural biases’? Recall that nudges are best conceived as choice-structure interventions designed to increase the chances of the ‘nudgee’ choosing the preferred option unthinkingly.
We would argue that, in both cases, behaviour change could in fact be considered the result of well-understood insights from the psychology literature. For example, as Frank Mols et al. point out, defaults involve appeals that make people aware, first, of socially normative and acceptable behaviour and, second, that they are under the surveillance of an authority. As for reminders and messaging, it is hard to see these as anything other than well-understood interventions from the psychology literature: guidance that is timely and relevant.
The focus on bias implies a view of customers as faulty. This is clearly a useful antidote to unrealistic expectations of human behaviour (which assume perfect resources, access, use of information and so on). Nevertheless, there is a danger that a ‘bias-bias’ creates an ‘us’ versus ‘them’ perspective, focusing on customers’ faults rather than on the way that people can in fact frequently be relied on to deliver positive outcomes, and on the fact that where biases do exist, they may well be highly effective ones.
What are the implications of this for the Customer Duty? While this might at first glance seem like nit-picking, the reality of applying guidance at scale means it is important to properly define the phenomenon if it is to be feasible to detect, measure and resolve it. Behavioural biases have long been a slippery phenomenon for practitioners, with much relying on the ‘expertise’ of the practitioner to spot them and then find ways either to capitalise on them with nudges or to mitigate their negative effects. If there is a regulatory requirement for firms to tackle them, then surely the burden of evidence needed for this process will be much higher.
Challenge 3: When is a nudge a sludge or a sludge a nudge?
Identifying positive outcomes is a notoriously difficult task – what might at first appear a no-brainer good thing can turn out to have unexpected consequences. As we have set out previously, a study from 2017, by researchers from Harvard and Yale Universities and the National Bureau of Economic Research (in collaboration with the United States Military Academy), suggests that the impact of auto-enrolment is not necessarily entirely positive. The authors looked at the impact of a US Army programme to automatically enrol newly hired civilians into their ‘Thrift Savings Plan’ (at a default rate of 3% of income). They found that, four years after hire, people automatically enrolled in the Thrift Savings Plan were borrowing more money for car loans (increased by 2% of income) and first mortgage balances (increased by 7.4% of income).
The implications for Customer Duty are that it is not always straightforward to determine when a feature of the ‘decision architecture’ leads to positive or negative outcomes.
The nudge debate will continue to run, with science doing its work to establish what works, when and why: indeed, there is an emerging body of evidence that combinations of nudges and other approaches work well together. As we have set out, though, there is a wider conversation to be had about the degree of focus on bias mitigation as a means of delivering positive outcomes.
The Customer Duty requirement comes from a positive place, but if this guidance is to be properly understood and used by organisations (let alone the myriad of consulting firms that will be eagerly seizing on this) then the implications need to be properly thought through.
As regulators, along with a range of other public and private sector bodies, take steps to manage the potential for harm, the degree to which ‘bias’ is considered a source of harm, and what is done to manage it, has much wider implications for practitioners.