Should we be more worried about fake arguments than fake news?
And how the misinformation problem may be at least partly ‘elite misinformation’
Some of my most frustrating moments have been when I feel I know the facts of a topic pretty well, but somehow, when discussing it with someone who has a different perspective, I fail to make my case, even though they don’t seem to have as good a grasp of the detail.
Surely most of us have been in this position: despite feeling strongly about an issue, we are challenged by someone who seems to have a very persuasive argument as to why we are wrong! In the heat of the moment, despite knowing the facts, we struggle to find an effective response and are left with an uncomfortable feeling that we have somehow been outmanoeuvred.
This is perhaps writ large in the way that a particular style of political figure has become increasingly familiar on university campuses, presenting themselves as a defender of open debate. They invite passers-by to challenge them, insisting that they are simply creating space for free exchange. Naturally, this looks like democracy working well: the ‘public square’ where the best case wins. But if we look at the format more carefully, we can see that the exchanges are often not a mutual airing of views aimed at arriving at the best possible answer, but are instead designed simply to win the argument at all costs.
Of course, persuasive communication is nothing new – but in an environment which is concerned with misinformation, perhaps we need to be alert not just to ‘what’ people say but ‘how’ they say it.
If it is hard to spot the techniques people are using to win arguments (regardless of how strong their case is), then surely spotting a ‘fake argument’ is just as important as spotting ‘fake news’. And it has been argued that the skills to do this are often embedded in institutions rather than being simply an individual characteristic.
So what does this mean for how we understand and tackle misinformation? And more widely, what lessons does this have for public debate generally?
Back to ancient Athens
To help us unpack what is going on, we first need to head to fifth-century BCE Athens, where ‘Sophist’ teachers travelled between city-states to offer instruction in the art of persuasion. These were highly educated figures, trained in the art of arguing either side of a case with fluency and precision. They, in turn, trained the young elites in how to win public debates, succeed in courts, and navigate civic life.
Philosopher Plato’s objection was not that these Sophists were necessarily wrong, but that persuasion had become detached from truth. The goal was not to discover the underlying truth of a question, but simply to secure victory. This meant that argument became a tool for dominance over one’s opponent, rather than a means of arriving at mutual understanding. In some ways, then, Sophistry was not simply about cleverness but about power, with the tools of the trade for maintaining it being the ability to argue to win.
We might consider that things have not changed all that much, judging by the book Excellent Sheep, in which William Deresiewicz suggests that today’s elite educational institutions often place greater value on students learning to present positions persuasively than on intellectual exploration. Success, he suggests, depends on winning the argument rather than working out the best outcome, and the skills to achieve this are more likely to be found in powerful institutions.
Not just what you say but how you say it
We can see the ‘art of the argument’ as simply a form of persuasive communication; the behavioural sciences have long documented the mechanisms through which people are influenced. For example, psychologist Robert Cialdini wrote the 1984 book on persuasion and marketing, Influence: The Psychology of Persuasion, based on three ‘undercover’ years spent applying for and training at used-car dealerships, fundraising organisations, and telemarketing firms to observe real-life situations of persuasion. Using this research, he suggested that influence is based on six key principles: reciprocity, commitment and consistency, social proof, authority, liking, and scarcity. Cialdini’s principles suggest we are influenced not only by arguments but by who delivers them, how many appear to endorse them, and how costly it would be to dissent.
And a similar set of principles has been termed ‘truthiness’, referring to the feeling that something is true because it feels right. Comedian Stephen Colbert popularised the term satirically, but the underlying psychology is well-documented. Claims that are familiar, easy to process, coherent, or visually supported are more likely to be judged accurate.
But perhaps we can make the case that these are different from a ‘Sophist’ approach: Cialdini’s persuasion research identifies general mechanisms of influence; ‘truthiness’ identifies metacognitive cues that shape perceived accuracy; ‘sophistry’, by contrast, is less a psychological tendency and more a practice. We can characterise it as the intentional deployment of techniques to win an exchange, often regardless of how well the underlying claim is supported.
The distinction is important because persuasion and truthiness describe what we see as vulnerabilities or ‘deficits’ in audiences; Sophistry, on the other hand, describes strategies adopted by speakers. So this is not simply about how our minds err, but about how communicative environments are shaped.

So what do these strategies actually involve?
The subtle mechanics of modern sophistry
Whether in ancient Athens or in the modern day, Sophistry rarely relies on outright falsehood. More often, it operates through moves that are technically defensible yet can reshape our perception of an issue.
One common mechanism is category stretching: expanding a concept in ways that increase its impact. Take the word ‘violence’: in everyday language, it usually refers to physical force. Yet in some contexts (mostly academic and activist), it is extended to include speech, exclusion, or structural harm. Within those frameworks, the move makes sense - language and institutions can cause real damage.
But broadening the term also brings the associations of physical harm into new areas, which has the effect of making disagreement feel less like debate and more like an actual injury. So while nothing has been fabricated, what counts as ‘violence’ has been changed, taking its emotional and political charge with it.
A second mechanism is anchoring through magnitude. Large numbers exert disproportionate influence, which means that when headlines announce that a policy will ‘cost £10 billion’ or ‘save 100,000 lives,’ the number quickly becomes a reference point in our minds. Even if the figure is provisional or model-based (and therefore subject to assumptions), it shapes how everything is then judged. Smaller adjustments can feel very minor next to a large anchor.
We saw this during the early stages of COVID, when high-end mortality projections circulated widely: these scenarios were built on assumptions and designed for planning. But once these large numbers entered public debate, they quickly became psychological landmarks and were difficult to replace with more realistic figures.
Closely related is catastrophic framing. Losses feel larger than gains, and negative information attracts more attention. A headline warning that a reform will ‘collapse the system’ or that a cultural trend is ‘destroying society’ travels further than one describing incremental change.
Finally, there is strategic ambiguity, phrasing that might carry broad force but has a much narrower technical meaning. For example, a government might describe a policy as ‘evidence-based,’ meaning evidence informed the decision, but this does not mean that it determined it. So whilst the phrase is defensible, it carries much more weight in the public’s mind than it merits.
Of course, any of us can deploy these moves in everyday conversation. But returning to an earlier point, their wider impact depends on status. It is those in positions of power, such as within elite universities, policy bodies or corporations, who have the reach and influence to redraw categories at scale, introduce anchors that stick, and set the tone of public debate.
When these techniques are then amplified through media and embedded in policy, they do more than win an argument. They reshape the basis on which arguments are conducted.
The sophistry to misinformation pathway
Drawing these strands together, we can see, first, how Sophistry begins to look uncomfortably close to what we call misinformation. A ‘fake argument’ rooted in form is not so different from ‘fake news’ rooted in content. Both can mislead. The difference lies less in whether something is technically false, and more in how it shapes understanding.
In addition, the dominant image of misinformation as fringe social media content with incorrect ‘facts’ may be misleading. Research suggests that exposure to outright false content is often limited - on that basis, it would appear that large-scale public misunderstanding cannot be explained solely by ‘fake news’ circulating at the margins. Which then raises a more unsettling possibility: if distortion frequently operates through ‘fake arguments’, then it may have a broader impact than ‘fake news’.
Consider the kind of abuse many women in public life face online. A journalist publishes an article and is met not with direct factual rebuttal, but with insinuation: perhaps screenshots are circulated out of context, her tone is dissected, old posts are resurfaced as evidence of a supposed ‘pattern.’ No single claim needs to be fabricated; instead, ordinary behaviour is reframed as proof of instability or hidden motive.
This is not fake news in the narrow sense of a demonstrably false statement; instead, the narrative encourages audiences to infer guilt while preserving plausible deniability. And because the distortion lies in implication rather than inaccuracy, it can escape traditional fact-checking frameworks: this is, in our terms, a ‘fake argument.’
Second, researchers and commentators have begun to recognise that misinformation can operate within mainstream institutions themselves. This has been described as ‘elite misinformation’: technically defensible claims emerging from respected bodies that nonetheless cumulatively mislead. This is not a case of these institutions necessarily intentionally setting out to mislead (although there are famous examples of this, of course), but institutions are often structurally rewarded for winning arguments.
An example comes from economic policy, where governments frequently describe spending packages as ‘fully funded’ or ‘cost-neutral’ - much more attractive than suggesting something will cost money. However, the calculations behind this may rest on long-run growth projections or behavioural assumptions that, whilst technically modelled, depend on a set of fairly bold premises. So, although the claim is not fabricated, it rests on modelling choices which may or may not come to pass – and yet the confidence of the presentation can exceed the robustness of those assumptions.
Another example is from the private sector, where a company might label a product as ‘natural’ because it contains plant-derived ingredients, and yet the manufacturing process might be highly industrial. So again, the term is legally defensible, but its everyday interpretation is much wider.
If these two points are right - that ‘fake arguments’ are a form of misinformation, and that this is not confined to fringe actors but extends to the institutions at the heart of society - then we need to consider carefully how we move the discussion forward.
What do we do about this?
Misinformation is traditionally understood primarily as a cognitive problem (people not being able to distinguish what is ‘fake’), which means the remedies typically centre on correcting citizens. We improve media literacy, teach critical thinking, debunk false claims, and inoculate against manipulation. But if distortion is at least partly institutional, then we cannot confine ourselves to correcting individual cognition.
This has at least three implications.
First, interventions should move beyond teaching people to spot false content and instead help them recognise misleading forms of argument. This requires a more sophisticated approach than the simple ‘fact-checking’ style of truth-versus-falsehood dichotomies.
Second, behaviour change efforts should account for the way in which the information is communicated - do the assertions reflect the information fairly and effectively?
Third, and perhaps most importantly, behavioural science must involve itself with institutional design. If organisations are rewarded for ‘winning arguments’ regardless of whether the best outcome is pursued, these problems will remain.
That’s not to say the current remedies do not tackle this at all, but what is given prominence in the debate, and where intervention efforts are focused, may well need to better reflect these points.
Conclusions
Perhaps the most significant but most difficult task in tackling this issue is cultivating wider societal norms that value fair arguments. This means slow discussion, allowing revisiting of definitions, and embedding claims in a lived context. This may do more to counter distortion than reactive correction alone.
And this is where the grassroots activity of talking to people, connecting with their concerns, and countering overstated arguments is much needed. We could perhaps see this in action in the recent UK by-election win by the Greens – a victory that some have attributed simply to talking to people. As political commentator Grace Blakeley put it:
“Their campaign did not treat voters as passive recipients of a polished messaging strategy, but as participants in a shared political project. Campaigners showed up on doorsteps across the constituency to listen, as much as to talk.”
A lesson for political parties, for sure, but also a wider lesson for all institutions: it is not sufficient to win the argument. It is the hearts and minds of people that need to be won to drive effective change.

