Faking it?
It sometimes appears as if we are inevitably going to be duped by fakes: but perhaps the distinction between authentic and fake is not always quite so simple
AI engineer Blake Lemoine grabbed the headlines with his claim that an AI is sentient. “I know a person when I talk to it,” Lemoine told the Washington Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
To propose that an AI has sentience feels a long way from what is right now technically possible. So why, we might ask, is an AI engineer claiming this? Has Lemoine been duped? And how can any of us disentangle what might be a real person and what is an imitation, a fake?
Another area of controversy concerning what is real or fake is the ‘predictive powers’ of AI. For example, it has been suggested that AI can predict the outcome of football’s next World Cup. But how authentic can this claim be, given the unique conditions in which the World Cup will be played (e.g., the timing and climate of the tournament, and the portfolio of players taking part)?
And a wide range of other attributes associated with AI are surely open to challenge. For example, a machine known as "Botto" has already made more than €1 million from its first four NFT artworks at auction, and the ‘AI influencer’ Miquela Sousa features in product endorsements for streetwear and luxury brands such as Calvin Klein and Prada. Across these cases, do we really think that machines have sentience, powers of prediction, artistic skills, and fashion sensibilities?
The degree to which we consider something real or fake is an issue that can feel pressing, as digital technology seems to make it easier than ever to create convincing fakes. Take ‘deepfakes’, the result of machine-learning systems in which an AI ingests video, photographs, or audio of a person or object, learns to copy its behaviour, and then maps the results onto another target, creating an uncannily precise counterfeit. There are many concerns about the damage that can be done by deepfakes, but also about the impact of mis- or disinformation (‘fake news’), with frequent suggestions that this can have huge societal implications.
Arguably the same considerations apply to the way financial ‘bubbles’ are created: for example, Nassim Nicholas Taleb likens Bitcoin to the 17th-century bubble in which the price of tulip bulbs skyrocketed before crashing. This reflects the idea that speculation can be based on something that is ultimately fake, and that we are all collectively duped.
Why we might succumb to ‘fake’
So how can behavioural science help us understand why we seem to succumb to fakes? There are a range of ways to start unpacking this broad and complex field, and the initial thinking set out below suggests that this is perhaps less to do with some kind of fault on our part and more the inevitable outcome of navigating complex environments:
Directed attention: Of course, anything that is fake often operates in a context where people are suggesting it is anything but. Widespread commentary about technology can imply sentience: the humanizing of robots is widespread, with language that subtly implies they have agency, human capabilities and sensitivities, or that even overtly models AI on the human brain (the language of ‘AI learning’ and artificial neural networks is inspired by human counterparts). So it is not necessarily a failure of our ability to spot fakes. Just as we fail to spot a gorilla when we are counting basketball passes, we are not necessarily looking for fakes if our attention is guided to look at the world in a different way.
Lack of comparisons: Tech advocate and critic Jaron Lanier suggests it can be hard to evaluate the effectiveness of AI. AI results look impressive, but it is hard to know how well they perform versus other much more straightforward methods. We could, for example, show people the best-selling books in a category in which they have recently made a purchase. Would this perform just as well? We do not know, because there is no baseline to compare against. AI may be better, but much of the time we simply cannot tell – which makes it hard to evaluate just how good AI is versus other methods.
Reliance on others: We do not live in a world in which we have the luxury of first-order evidence: we do not have the time or capability to check absolutely everything ourselves. We necessarily need to rely on others, and will trust certain sources or use certain cues to inform us about what we can rely on (or when we need to be more suspicious). Knowing what these cues are is important – when a claim appears to carry scientific authority or comes with a set of precise numbers (as with the World Cup predictions), it is reasonable that we are then less suspicious about its authenticity.
Challenging orthodox beliefs: Blake Lemoine identifies himself in the Washington Post article as a mystic Christian priest, suggesting he is sympathetic to a belief system that questions orthodox explanations about life. The mystical aspect of this reflects a form of knowledge often considered ‘rejected’, ‘suppressed’ and ‘stigmatized’ by those in ‘authority’. As we have set out previously, we are often motivated when we find ourselves in an underdog position, wanting to question the authority of orthodox, dominant positions. This can apply just as easily to non-mystical beliefs: take the way in which cryptocurrencies’ challenger / underdog status could well have been a key ingredient in the widespread enthusiasm that we saw.
None of these suggests a deficit in people; rather, the task of establishing what is truth and what is fiction often requires a fair amount of work (and trust) – so we will inevitably sometimes get it wrong.
Unpacking suggestibility
More broadly, we can see that humans are suggestible beings, a finding that early psychologists such as William McDougall viewed as an indicator of our vulnerability to manipulation. Suggestible minds, through this prism, are overly influenced by what other people say. But paradoxically, this capacity for suggestibility is also something to be celebrated, as it is what makes learning, emotions, socialization, and social cohesion possible.
Growing up, and as we learn, we take our cues from each other; through such socialization the individual emerges, capable of rationally checking the evidence supporting the propositions they entertain. This is the paradox of being human: if we close ourselves off from any susceptibility to being deluded or duped, it is very hard to live any kind of meaningful life.
What is a fake anyway?
Taking the thinking further, Patricia Kingori suggests the difference between real and fake is not as precise as we might assume. For example, how do we untangle what is genuine and what is fake when we can synthetically produce, for £100, a diamond with the same carat, clarity and chemical composition as a natural diamond that would ordinarily sell for £10,000?
Kingori also notes that the notion of fakes and authenticity is culturally specific: some Southeast Asian cultures simply have no word for fake:
“Something is either a good copy or a bad one. And there is no ‘one’ – no concept of the pure, unattenuated original.”
In a parallel way, we can also challenge the degree to which people are actually duped. People often disbelieve misinformation when it is presented to them. Belief is not necessarily binary; we can hold it in different shades and degrees.
The politics of what is real
There will always be a battle over what is considered ‘truth’, ‘genuine’, ‘real’ and what is a ‘mistruth’, ‘artificial’ and ‘fake’. Look at the narratives over the current rail strikes in the UK: Holly Smith points out that language such as “union barons” who are “behind the strike” provides an inaccurate characterisation of the way strikes work in practice. But clearly many use such terms, reflecting their beliefs about the way that trade unions really operate.
In a similar way, as we have set out previously, the media can frame climate change in a way that accepts its existence, but justifies inaction or inadequate efforts to tackle it.
Determining where the boundaries sit between what is real and what is fake is contested, and the drawing of the boundary line is often a highly political act.
In conclusion
We have discussed the ways in which humans can perfectly reasonably conclude that something is authentic when many others consider it fake. Because we have to rely on others to guide us through life, and need to be open and trusting if we are to operate in society, we are inevitably vulnerable to being duped. But the notion that technology makes this a more pressing issue than at any other point in history is open to challenge. The panic we are witnessing today about the power of technology to drive fakes arguably mirrors the reaction to the arrival of telegraphy in the 19th century. These issues have always been with us; perhaps only the forms they take are changing.
More widely, we can see that untangling what is real and what is fake is not always straightforward. The lines are often blurry and political, with humans often able, it seems, to hold quite a sophisticated position straddling some pretty nuanced lines.
Perhaps, as we collectively see these issues more clearly, there is a sense of epistemic uncertainty as we feel the loss of what may once have seemed a real, knowable world. This could be the price we pay for living in a more pluralistic environment, where we can better understand that facts, truth and authenticity are legitimately contested.