The psychology of misinformation and voting behaviour
While misinformation is a problem, is the bigger issue that we overestimate how susceptible other people are to it?
The world is at a momentous point: over the next two years, around three billion people will head to the polls across many countries, including Bangladesh, India, Indonesia, Mexico, Pakistan, the United Kingdom and the United States. Whilst this number of countries holding elections in such a timeframe is not without historical precedent, a combination of factors such as geopolitical tensions, the ongoing cost of living crisis and the rise of right-wing populism mean these elections are widely considered to be particularly significant.
One theme that has also been regularly cited is the potential impact of misinformation – a World Economic Forum report cites this as the most severe global risk anticipated over the next two years, suggesting that foreign and domestic actors alike will leverage misinformation to further widen societal and political divides. The report suggests that:
Resulting unrest could range from violent protests and hate crimes to civil confrontation and terrorism. Beyond elections, perceptions of reality are likely to also become more polarized, infiltrating the public discourse on issues ranging from public health to social justice.
There are, however, differing opinions on this issue. On the one hand, some claim that the implications of these misinformation campaigns could be profound, threatening democratic processes. On the other hand are those who consider these fears understandable but overblown, pointing to research suggesting that while AI may result in more misinformation during elections, it will have little effect.
So what can we unpack from this – and can behavioural science throw some urgent clarity on the situation?
History of misinformation
As is well known, misinformation and disinformation (unintentional and intentional sharing of false information respectively) are not a recent phenomenon. There are documented examples as far back as Ancient Rome with Cicero spreading disinformation about Mark Antony's ethics in his speeches, influencing public opinion and political alliances during the turbulent electoral campaigns of the Roman Republic.
In the 1950s, US Senator Joseph McCarthy used lies and rumours to stoke fears of communist infiltration in the U.S. government. Whilst not directly linked to a single election, his tactics significantly influenced the political landscape, affecting voting behaviour in various elections during the era. In the UK, days before the 1924 General Election, the Daily Mail published the "Zinoviev Letter," which purportedly showed a Soviet plan to support a British socialist revolution. The letter was later discredited, but it significantly impacted the election by damaging the reputation of the Labour Party.
So there is a long history of ‘dirty tricks’ around election time which existed long before current concerns about the role of social media amplifying misinformation. Nevertheless, we could argue that there is a danger AI could make misinformation more pervasive and persuasive, becoming more tailored and harder to spot. Elizabeth Seger, a researcher at the Centre for the Governance of AI, suggests that highly-personalized AI-enabled targeting could be used to carry out mass persuasion campaigns.
The case for misinformation
So just what is the evidence that misinformation can influence voting intentions and behaviour? Zoe Adams, Magda Osman and colleagues recently reviewed the influence of misinformation, locating a range of papers that indicate its influence on outcomes. From this we can see how misinformation has been used to explain the rise of far-right platforms and religious extremism, which in turn affects voting behaviour, disengagement from political voting, and intended voting behaviour.
They suggest that one of the most significant claimed effects of misinformation is that it leads voters towards supporting policies that are counter to their own interests, for example, the 2016 US election and Brexit vote in the UK. One study they cite indicates how the reporting of the 2016 US election reduced trust in media based on false news stories associated with both political parties.
Despite this, as the authors point out, the field is a difficult one in which to derive tangible evidence, as there is no consensus on how to accurately measure misinformation in order to establish its direct effects on democratic processes (e.g., election voting, public discourse on policies).
And there are strong arguments to support the counter position that misinformation in fact has little impact on outcomes. For example, in the US many voters have made up their minds long before election day, and as such seeing misinformation probably won’t change most voting behaviour. One paper published in Nature in 2023 found:
“no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign [in 2016] and changes in attitudes, polarization, or voting behavior.”
Political scientist Andreas Jungherr suggests that people often misjudge the effects of misinformation because they overestimate how easy it is to change people’s views on issues such as voting behaviour and how effective misinformation-enabling technologies such as AI are.
We do not actually consume as much misinformation as we think
Linked to this, despite the widespread perception that misinformation is rampant, some studies suggest that false news makes up only a minority of the average person’s information consumption. Indeed, news itself makes up only a small minority of our media consumption: a 2020 study found that, of the average 7.5 hours of media consumed per day by those living in the US, about 14% was related to news.
Another recent study suggested that for adult Facebook users in the US, less than 7% of the content that they saw was related to news, even in the months leading up to the 2020 U.S. elections. And when Americans do read news, most of it comes from credible sources: a 2021 study found that 89% of news URLs viewed on Facebook were from credible sources.
Indeed, political scientist Brendan Nyhan suggests that fears about the spread and influence of fake news have been over-hyped, and many of the initial concerns about the scope of the problem and its effect on political outcomes are exaggerated.
Even if concerns about the scope of the problem and its effect on political outcomes are exaggerated, other problems can still arise, one of which is the ‘liar’s dividend’. This is the way that concerns about misinformation can make it easier to claim that true information is false, by relying on the belief that the information environment is saturated with misinformation.
Examples of this include Elon Musk and January 6th rioters, who both raised questions in court about whether video evidence against them was AI-generated. On this basis, we can imagine candidates who have been caught on film saying something problematic simply claiming the footage is fake. And, as the Brookings Institution points out, while the courts have developed complex procedures over hundreds of years to validate evidence and reveal forgeries, public opinion is another matter entirely.
Whilst it appears that the evidence for the impact of misinformation on voting outcomes is far from clear, there is still a very real concern that the so-called ‘third-person effect’ means that it has a significant impact, just not quite in the way we might imagine. This effect is the tendency to overestimate the presumed influence of harmful media on others as compared with themselves.
Research has suggested that, regardless of political identification, people are much less satisfied with democracy the more they believe misinformation influences others relative to themselves. So even if AI-generated misinformation doesn’t actually reach or persuade people, the huge amount of media coverage it garners may lead the public to believe that it does.
One study found that media coverage about fake news lowered trust in the media, whilst another poll found that 53% of Americans believe that misinformation spread by AI “will have an impact on who wins the upcoming 2024 U.S. presidential election”, even though there is little substantive evidence to suggest this is the case.
At the heart of this are concerns about procedural justice. We are much happier with outcomes, even if they are not the ones we like or are counter to our interests or desires, when we consider the processes to be free, fair and just. By contrast, when we feel that the procedures used to make decisions are not fair or just, or that we do not have a sufficient voice in the outcome, we become dissatisfied and lose commitment to the rules as a whole.
The implication of this for misinformation is clear: if we feel that democracy’s procedures have been manipulated due to the influence of misinformation, we are less satisfied with it.
Of course, misinformation may well influence voting behaviours and intentions, but the degree to which this is something very different from the dirty tricks that have been played around elections over the centuries is debatable. And the evidence that misinformation is successful at influencing people remains contested.
But there does seem to be a convincing case for considering how the coverage of misinformation may create concerns about the reliability of our democratic systems. This suggests a need for misinformation to be discussed in a way that does not exaggerate its impact on others. Media literacy campaigns and education could include guidance to help people better assess the magnitude of the threat from misinformation and manage their anxiety about it.
Linked to this, there is a concern that all examples of protest are being framed as a question of disinformation. For example, Susan Rice, Domestic Policy Advisor to Joe Biden, invoked Russian meddling in the context of the 2020 Black Lives Matter protest movement. There is a danger that misinformation becomes the explanatory vehicle for all manner of behaviour and, as such, misattributes the underlying causes in a way that may well be unhelpful.
Overall, we can see that the social science of misinformation in relation to voting behaviour is more nuanced than it first appears. This calls for a more balanced and nuanced view of the problem – and a need to look at the public's concerns and beliefs about misinformation as much as (if not more than) the direct impact of misinformation on voting behaviour.