Speaking with the specialist
We have seen that expert practitioners are a critical part of behaviour change: so how do we enter the conversation with them?
Previously in Frontline BeSci we looked at engaging with experts as an effective way of facilitating positive behaviour-change outcomes. But how do we engage with the experts themselves? This is an important consideration, as many organisations are keen to do just this. It might be to persuade experts to comply more closely with formal guidance or regulations, or to persuade them of new research that suggests a different approach to the one they currently adopt. A pharmaceutical company, for example, may have run an RCT showing that its treatment works well in a particular disease area. How can it hope to change the behaviour of experts, who will themselves have a great deal of clinical experience?
How experts make decisions
To explore this we need to think about the way in which experts make their decisions, which always combine individual clinical evidence with more general guidance. At the heart of this is Real World Evidence (RWE): the collection of data about the effectiveness of treatments in the wider patient population, a topic currently creating a buzz in healthcare.
RWE evaluates treatments using data generated during routine clinical practice, and as such sits outside formal trials. The value of RWE is that clinical trials (known as Randomised Controlled Trials, or RCTs) cannot always account for the entire patient population of a given disease: for example, patients suffering from comorbidities, or belonging to a particular age group, may not have taken part in any clinical trial. In this way, RWE can compensate for an RCT's inevitable limitations of coverage, and for the way its controlled nature cannot reflect the multi-faceted experience of the real world.
We can perhaps see this as formalising the way in which doctors have always relied on their own clinical experience to make prescribing decisions. And doctors are not alone: most expert practitioners need to match the individual cases they see against general principles in the shape of theories, regulations, best-practice guidance and so on.
But to what degree are expert practitioners guided by individual clinical observations versus the principles developed from RCTs? There is always a tension between the general and the particular for the expert practitioner. As science historian Lorraine Daston put it:
“…no universal ever fits the particulars. Never in the history of human rulemaking have we created a rule or a law that did not stub its toe against unanticipated particulars.”
If we are to engage with expert practitioners effectively then surely we need a better understanding of how they negotiate between particulars and universals.
The psychology of expert decision making
To explore this we turned to the literature, which reveals some interesting points about the way experts use evidence in their decision making.
First, and perhaps counter-intuitively, research has found that experts actually look for less rather than more information when making a decision: after all, one of the things that makes them expert is the amount of information they have already been exposed to. They have well-developed mental schemas that they bring to an issue, so when presented with new information they are well equipped to fit the unique details of an individual case into a wider, rules-based understanding of the world.
Research on world-class chess masters found they can spot meaningful patterns more readily than lower-ranked players: they do not need to think through all the possibilities (greater breadth) or all the possible countermoves of the opponent (greater depth).
In a related manner, novices tend to go straight into problem-solving mode, while experts spend more time working toward understanding the problem. Donald Schön wrote about this skill in his classic book ‘The Reflective Practitioner’. He makes the point that experts are not just involved in problem-solving but in problem-setting. Much of the time expert practitioners are presented with a fairly vague challenge: ‘What is the diagnosis of the respiratory illness this patient is exhibiting?’ is a classic example. Experts are good at retrieving the knowledge that is relevant to a particular task, something known as ‘conditionalized knowledge’: they understand which body of knowledge to use for which issue.
Again, we can see there is an interplay as the expert looks for the specific cues that signal the need to call on the appropriate principles: the particular symptoms, problems, behaviours and so on are presented and the practitioner then needs to determine the lens or framing that can be used to explain them. So real world evidence is always the starting point for a practitioner who then decides which mental schema (or diagnosis) they will use to understand it. This is the classic journey of the expert, going from the particular to the general and then back out to the individual again.
Related to this is the notion of ‘adaptive expertise’: the way in which effective experts approach new situations flexibly and learn throughout their lifetimes. They are metacognitive, meaning they continually question their current levels of expertise and attempt to move beyond them. It is not simply about attempting to do the same things more efficiently, but about attempting to do things better. That is not to assume they are always right, of course – the inductive nature of the expert process means we can only hope for conclusions that are ‘probably’ right.
So how does this help us to answer the original question: how do you best engage with expert practitioners? To do this effectively it seems essential to understand their ‘conditionalized knowledge’: the way they move between the ‘universal’ and the ‘particulars’. By doing this, we can then see where the challenges lie – at what points the particulars of a clinical situation fail to connect effectively with the findings from RCTs, or with other guidelines, regulations and so on.
It is also important to note that experts are typically highly motivated to do things better, and will be keen to evaluate whether a proposed new solution, piece of information or approach will help them to do this.
To engage with experts is perhaps a little like being a great librarian: you do not need to have read all the books yourself but you need to understand the different schools of thought represented by the books and when and why people might want to reference them.
The activity around RWE is a move to formalise the way in which clinical observations can be encoded into new guidance (turning ‘particulars’ into new ‘universals’). Nevertheless, there will inevitably be limits to the degree to which clinical judgements can be formalised – even with the value generated by a formalised evaluation of treatments through RWE, no universal will ever fit the particulars.
The formalisation of RWE raises an interesting point for behavioural science, which has generally drawn a firm line between testing (using RCTs) and evaluation: the assessment of how well an intervention (the tool used to change behaviour, often but not always communications) worked once it was launched. We can perhaps see evaluation as having a much closer relationship to testing, and a greater diagnostic role to play. To do this we can include measurement of the dimensions we have hypothesised are responsible for the behaviour. So, if we assume that social norms shape the outcome we are interested in, and have designed the communications (or other intervention) to leverage this, then we need to measure not only the outcome behaviour but also the extent to which social norms were evoked.
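The logic of that kind of diagnostic evaluation can be sketched in code. Everything below is hypothetical – simulated respondents and invented effect sizes, not real study data – but it shows the principle: measure the hypothesised mechanism (here, perceived social norms) alongside the outcome behaviour, and compare both across exposed and unexposed groups.

```python
# A minimal, hypothetical sketch of a diagnostic evaluation.
# All data are simulated; the effect sizes are invented for illustration.
import random

random.seed(42)

def simulate_respondent(exposed):
    # Assume exposure to the intervention nudges perceived social norms
    # upward, and that stronger norms raise the probability of the
    # target behaviour. Both assumptions are part of the illustration.
    norms = random.gauss(0.5 + (0.2 if exposed else 0.0), 0.1)
    behaviour = 1 if random.random() < norms else 0
    return norms, behaviour

def evaluate(n=1000):
    # Collect mean norms score and behaviour uptake for each group.
    summary = {}
    for exposed in (True, False):
        rows = [simulate_respondent(exposed) for _ in range(n)]
        mean_norms = sum(r[0] for r in rows) / n
        uptake = sum(r[1] for r in rows) / n
        summary[exposed] = (mean_norms, uptake)
    return summary

summary = evaluate()
norms_shift = summary[True][0] - summary[False][0]
uptake_shift = summary[True][1] - summary[False][1]
print(f"norms shift: {norms_shift:.2f}, uptake shift: {uptake_shift:.2f}")
```

The diagnostic value lies in the comparison: if the intervention had moved behaviour without moving the norms measure, the hypothesised mechanism would be in doubt, and the evaluation would tell us something testing alone could not.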
For those interested in reading more, this article is a great review of the literature on expert decision making.