When fake feels true
Deepfakes and the behavioural reconstruction of reality
There has been a sharp increase in political deepfakes: AI-generated images and videos that often mix politics with eye-catching visuals and monetised content. These include synthetic ‘ordinary’ people placed into political scenes, as well as manipulated depictions of public figures.
At one level, this development appears to confirm a familiar concern: if synthetic media becomes sufficiently realistic, we may struggle to distinguish fact from fake, and if we cannot, we risk being persuaded by arguments built on malicious and misleading information.
However, a behavioural lens suggests a more complex dynamic. As media academic Daniel Schiff has set out, this sort of material can have an impact even when it is recognised as fake; it simply ‘feels true’. This raises a different set of questions. Rather than asking whether individuals can detect fakes, perhaps we should ask why synthetic content becomes meaningful for people, and how it might influence behaviour even when people do not literally believe it.
From epistemic judgement to affective alignment
Conventional explanations of misinformation typically assume that we evaluate content primarily in terms of accuracy. On this basis, the problem is one of misclassification: false information is mistakenly treated as true.
But deepfakes complicate this notion, because engagement with synthetic content does not depend solely, or perhaps even primarily, on its perceived accuracy or authenticity. Instead, deepfakes appear to function through affective and identity-based alignment: they resonate with existing beliefs, personal values or group identity, and in this way acquire a legitimacy that has little to do with factual correctness. This is consistent with research on motivated reasoning and identity-protective cognition, where we selectively accept information that is congruent with what we already believe.
In this sense, deepfakes reflect our underlying beliefs, attitudes, sympathies and emotions, making them visible. The codes, signals and symbols that normally operate in the background are brought to the surface in a highly salient way. Arguably, this lets us see more clearly how authority is generated, how identity is signalled, and how meaning is assembled.
Synthetic media as expressive infrastructure
If we see deepfakes in this way, then they are not isolated pieces of deceptive content but part of a wider expressive ecosystem. They share similarities with memes, satire and other symbolic forms that circulate within digital culture. In all of these cases, the role is not primarily evidential (fake or otherwise) but communicative: they translate often complex beliefs and narratives into condensed, legible formats that can be rapidly interpreted and circulated. They operate less as claims to be verified and more as signals to be recognised.
A useful parallel is the political cartoon. During the early 19th century, British caricaturist James Gillray repeatedly depicted Napoleon Bonaparte as a tiny, petulant figure, most famously in “Maniac Ravings, or Little Boney in a Strong Fit” (1803), where he is shown thrashing wildly in a padded cell. The image was not intended to be an accurate representation, but to condense a broader political narrative: that Napoleon was unstable, dangerous, and illegitimate.
A similar approach appears in wartime propaganda, where the British “Careless Talk Costs Lives” posters of World War II did not document real events but pictured exaggerated scenarios such as eavesdropping enemies lurking behind everyday conversations. These were designed to make abstract risks visible and memorable.
Going even further back, medieval religious imagery functioned in much the same way. Relics such as fragments of the “True Cross” or illustrated icons of saints were not always verified as ‘real’ in any modern sense, but they were treated as meaningful because they made belief tangible and present.
Deepfakes follow this same logic but with an important difference: they make use of the visual grammar of reality itself. AI-generated personas make this clear. Images of fictional women in military settings circulate widely online, even though the details often do not hold up to scrutiny: the uniforms are incorrect and the scenarios implausible.
Clearly, what matters is not plausibility but composition: the images compress cues of sexuality, authority and nationalism into a single, instantly legible frame.
Non-binary belief and the persistence of influence
A key policy concern about deepfakes assumes that belief in their message is binary: we either accept or reject the claim a deepfake makes, which implies that exposing content as false should reduce its impact.
However, there is plenty of evidence to suggest that belief is less a binary of true and false and more often provisional: we can be dubious about the claim in a deepfake while still finding it meaningful or worth sharing.
This aligns with what philosopher Daniel Dennett describes as “belief in belief”, as well as sociological accounts of how individuals maintain multiple, sometimes contradictory, commitments. These positions suggest that belief is often less about arriving at a settled judgement of truth and more about maintaining a workable orientation to the world. People may hold onto ideas not because they are fully convinced, but because those ideas feel right, are socially reinforced, or simply support a broader worldview they are invested in.
In practice, this means we have a looser relationship to the notion of truth than might be assumed. We rarely stop to resolve whether something is definitively real or fake; instead, we move through the world holding things lightly.
Implications for intervention
These dynamics challenge much of the received wisdom in behavioural science, which focuses mainly on detection, labelling or debunking. If content spreads through alignment, identity and meaning, then correcting factual inaccuracies will have only a limited behavioural impact.
This echoes broader debates within behavioural science about the limits of information-based interventions. Simply providing better facts or clearer signals of accuracy assumes that people are primarily motivated by truth-seeking. The evidence here suggests something more complex.
A more realistic approach would instead engage with the conditions under which content becomes meaningful and shareable. In practice, this begins to look quite different:
Working with narratives, not just facts: Rather than only correcting false claims, interventions could focus more on offering alternative narratives that are equally compelling. We see this in areas like vaccine uptake, where campaigns that centre on personal stories and lived experience often outperform purely statistical messaging. The same logic could apply to synthetic media: instead of simply flagging a deepfake as false, an intervention might counter it with a more compelling narrative that reframes the issue in human terms. The competition is not just over accuracy, but over which story travels.
Targeting social contexts and networks: If content spreads through social affiliation, then interventions need to work through those same channels. Research shows that targeting influential individuals within networks can shift norms and behaviours at scale. In practice, this might mean working with creators, moderators or highly connected users within specific communities who can model alternative ways of engaging with content.
Designing for friction and reflection: Prompts that encourage users to pause and consider the accuracy of content before sharing have been shown to reduce the spread of misinformation. During election cycles, some platforms have introduced warnings or prompts before resharing political content. With deepfakes, similar approaches could involve surfacing information about how an image was generated or prompting users to reflect on its source. The goal is not perfect judgement but interrupting automatic amplification.
Building alternative repertoires of participation: Finally, if deepfakes serve as a means of participation, interventions could offer alternative ways for people to participate. Rather than simply discouraging engagement, this might involve creating formats that allow users to express identity, humour or political stance without relying on misleading or synthetic content. This could include counter-memes, participatory campaigns, or community-led content that reshapes norms from within. As legal scholar Cass Sunstein suggests, online political participation is often driven as much by opportunities for expression and affiliation as by information.
Taken together, these interventions can shift the emphasis from correcting information to shaping the environments and sensemaking frameworks within which information is encountered. The task is not simply to help people distinguish true from false, but to understand how content becomes meaningful, how it travels, and how those dynamics can be redirected.
Conclusions
Our engagement with information is never purely evidence-based. Research has consistently shown that the way people interpret and act on information is shaped by identity, social norms, heuristics and context, not by accuracy alone. But perhaps what synthetic media does is expose this more clearly: we can see how people use information not just to understand the world, but to navigate it, to position themselves within it, and to connect with others.
In doing so, deepfakes challenge models that treat truth as a straightforward input to behaviour, in which better information leads to better decisions. Instead, they point to a more ‘situated’ view of behaviour, in which meaning is constructed collectively and perception, interpretation, and action are bundled together.
For behavioural science, this means the challenge is more demanding. The task is not limited to understanding how individuals process information or how biases distort judgement. Instead, it is about understanding how our shared realities are assembled through social interaction, how they are stabilised within networks, and how they become actionable through shared narratives and symbols, such as deepfakes. In a sense, then, deepfakes are not simply a problem to be corrected or contained; they are a signal of a wider changing knowledge environment.


