Should we be more worried about fake arguments than fake news?
And how the misinformation problem may at least be partly ‘elite misinformation’
Many of us have surely been in the position where, despite feeling strongly about an issue, we are challenged by someone else who seems to have a very persuasive argument as to why we are wrong. In the heat of the moment, we struggle to find an effective response and are left with an uncomfortable feeling that we have somehow been outmanoeuvred.
This is perhaps writ large in the way that a particular style of political figure has become increasingly familiar, presenting themselves as defenders of open debate. They appear on university campuses, inviting passers-by to challenge them, insisting that they are simply creating space for free exchange. Naturally, this looks like democracy working well, the public square where the best case wins.
But if we look at the format a little more carefully, we can see that the exchanges are not really about mutual clarification to arrive at the best possible answer, but rather are designed to simply win the argument at all costs.
Of course, persuasive communication is nothing new – but in an environment concerned with misinformation, do we need to be alert not just to ‘what’ people say but ‘how’ they say it? If it is hard to spot the techniques people are using to win arguments (regardless of how strong their case is), then surely spotting a ‘fake argument’ is just as important as spotting ‘fake news’.
Back to ancient Athens
To help us to unpick what is going on, we need to head to fifth-century BCE Athens. Here ‘Sophist’ teachers travelled between city-states offering instruction in the art of persuasion. These were highly educated, sophisticated figures, trained in the art of arguing either side of a case with fluency and precision. They trained the young elites in how to win public debates, succeed in courts, and navigate civic life.
Philosopher Plato’s objection was not that these Sophists were necessarily wrong, but that persuasion had become detached from truth. The goal was not to discover, but to secure victory, which meant that argument became a tool for dominance rather than understanding.
In a sense then, sophistry was not simply about cleverness but status, authority, and performance in public space, rewarding those comfortable with adversarial exchange. And arguably, things have not changed as much as we might think in the subsequent centuries: in his book, Excellent Sheep, William Deresiewicz reflects on how today’s elite educational institutions can place a high value on students learning how to present positions persuasively. In competitive environments, the ability to defend a stance convincingly can take precedence over intellectual exploration. Success often depends on demonstrating command rather than exploring the topic.
Not just what you say but how you say it
As noted, persuasive communication is nothing new, and the behavioural literature has been key to understanding how it works.
For example, psychologist Robert Cialdini wrote the 1984 book on persuasion and marketing, Influence: The Psychology of Persuasion, based on three “undercover” years applying for and training at used car dealerships, fundraising organisations, and telemarketing firms to observe real-life situations of persuasion. He proposed that influence is based on six key principles: reciprocity, commitment and consistency, social proof, authority, liking, and scarcity. Cialdini’s principles suggest we are influenced not only by arguments but by who delivers them, how many appear to endorse them, and how costly it would be to dissent.
And a similar set of principles has been given the term ‘truthiness’, referring to the feeling that something is true because it feels right. Stephen Colbert popularised the term satirically, but the underlying psychology is well-documented. Claims that are familiar, easy to process, coherent, or visually supported are more likely to be judged accurate. Fluency becomes a cue for truth. When a statement flows smoothly, when it rhymes, when it is repeated, when it is paired with an image, it acquires a subtle epistemic glow.
While Cialdini’s persuasion research identifies general mechanisms of influence and truthiness identifies metacognitive cues that shape perceived accuracy, ‘sophistry’, by contrast, is less a psychological tendency and more a practice. We can characterise it as the intentional, strategic deployment of rhetorical form to win an exchange, often regardless of whether the underlying claim is well supported. And this, perhaps, means it is deserving of particular focus.
The subtle mechanics of modern sophistry
Whether in ancient Athens or in the modern day, sophistry rarely relies on outright falsehood. More often, it operates through moves that are technically defensible yet can reshape our perception of an issue. Let’s briefly examine some of the main principles:
One common mechanism is category stretching: expanding a concept in ways that increase its impact. Take the word ‘violence’: in everyday language, it usually refers to physical force. Yet in some (mostly academic and activist) contexts, it is extended to include speech, exclusion, or structural harm. Within those frameworks, the move is coherent - language and institutions can cause real damage.
But broadening the term also imports the associations that come with physical harm into new domains, meaning that disagreement begins to feel less like debate and more like injury. So while nothing has been fabricated, what counts as ‘violence’ has been changed, and as it moves, the emotional and political stakes move with it.
A second mechanism is anchoring through magnitude. Large numbers exert disproportionate influence, which means that when headlines announce that a policy will ‘cost £10 billion’ or ‘save 100,000 lives,’ the number becomes a reference point. Even if the figure is provisional or model-based, it shapes how everything that follows is judged. Smaller adjustments feel minor next to a large anchor.
We saw this during the early stages of COVID-19, when high-end mortality projections circulated widely: these scenarios were built on assumptions and designed for planning. But once large numbers entered public debate, they became psychological landmarks – as such, nuance competed with scale.
Closely related is catastrophic framing. Losses feel larger than gains, and negative information attracts more attention. A headline warning that a reform will ‘collapse the system’ or that a cultural trend is ‘destroying society’ travels further than one describing incremental change.
Finally, there is strategic ambiguity: phrasing that carries broad force in public but has a much narrower technical meaning. A government might describe a policy as “evidence-based,” meaning evidence informed the decision, but this does not mean that it determined it. A company might say a product is “natural,” but this refers to its ingredients rather than its processing. Each phrase is defensible, yet it carries far more weight in the public’s mind than its technical meaning warrants.
Of course, any of us can deploy these rhetorical moves in everyday conversation. But their wider impact depends on status. It is those in positions of institutional authority, such as within elite universities, policy bodies, or corporations, who can redraw categories at scale, introduce anchors that stick, and set the tone of public debate.
When these techniques are amplified through media and embedded in policy, they do more than win an argument. They reshape the terrain on which arguments are conducted.
The sophistry to misinformation pathway
On this basis, sophistry begins to look uncomfortably close to what we call misinformation. A “fake argument” rooted in form is not all that different from “fake news” rooted in content. Both can mislead. The difference lies less in whether something is technically false, and more in how it shapes understanding.
In addition, the dominant image of misinformation as fringe conspiracy or viral social media falsehood may be misleading. Exposure to outright false and inflammatory content is often limited and concentrated among a relatively small segment of users. This would suggest that large-scale public misunderstanding cannot be explained solely by fake news circulating at the margins.
Which then raises a more unsettling possibility. If distortion frequently operates through framing, scaling, and emphasis rather than fabrication, then ‘fake arguments’ may in some contexts have a broader impact than fake news. And increasingly, scholars and commentators have begun to recognise that misinformation can operate within mainstream institutions themselves. This has been described as ‘elite misinformation’: technically defensible claims emerging from respected bodies that nonetheless cumulatively mislead.
This is not to say that these institutions necessarily set out to mislead (although there are famous examples of that, of course), but institutions are often structurally rewarded for clarity over complexity, urgency over calibration, and confidence over uncertainty.
Those who communicate issues in ways that introduce large anchors, stretch categories, or rely on ambiguity are not necessarily deceiving. They are often operating within systems that prize decisiveness, media traction, and moral seriousness. In such environments, rhetorical intensity is adaptive behaviour.
For example, in economic policy, governments frequently describe spending packages as ‘fully funded’ or ‘cost-neutral,’ relying on long-run growth projections or behavioural assumptions that might be technically modelled but are highly contingent on a set of fairly bold assumptions. So, whilst the claim is not fabricated, it rests on modelling choices which may or may not come to pass – and yet the confidence of presentation can exceed the robustness of those assumptions. And in the private sector, a company might label a product as ‘natural’ because it contains plant-derived ingredients, even though the manufacturing process is highly industrial. So again, the term is legally defensible, but its everyday interpretation is much wider.
The problem, therefore, may not sit only at the fringes. It may also sit at the centre.
What do we do about this?
If misinformation is understood primarily as a problem of cognitive weakness, then the response naturally centres on correcting citizens. We improve media literacy, teach critical thinking, debunk false claims, and inoculate against manipulation. The implicit model is deficit-based: individuals are misled because they lack the skills to distinguish truth from falsehood.
But if distortion is partly institutional, then behavioural science cannot confine itself to correcting individual cognition. It must also examine how communicative environments are designed.
This has at least three implications.
· First, interventions should move beyond teaching people to spot false content and instead help them recognise misleading rhetorical form. This requires a more sophisticated model of persuasion than the simple ‘fact checking’ approach of truth-versus-falsehood dichotomies.
· Second, behaviour change efforts should account for proportionality. Does the way in which the information is communicated reflect the information fairly and effectively?
· Third, and perhaps most importantly, behavioural science must involve itself with institutional design. If organisations are rewarded for salience and punished for uncertainty, then these problems remain embedded. Changing behaviour may therefore depend as much on altering institutional incentives as on altering individual cognition.
Of course, there is a danger that pointing to institutional distortion can be weaponised by those seeking to undermine trust altogether. But acknowledging structural incentives is not the same as outright cynicism; rather, it is an argument for stronger standards, so that trust is maintained through discipline rather than authority alone.
Conclusions
Perhaps the most significant but most difficult task in tackling this issue is cultivating wider societal norms that value proportion, uncertainty, and sustained engagement. Formats that slow discussion, allow revisiting of definitions, and embed claims in a lived context may do more to counter distortion than reactive correction alone.
And this is where grassroots activity – talking to people, connecting with their concerns, countering arguments that are overstated, and so on – is much needed. We could see this in action in the Greens’ recent win in a UK by-election, a victory that some have attributed to simply talking to people. As political commentator Grace Blakely put it:
“Their campaign did not treat voters as passive recipients of a polished messaging strategy, but as participants in a shared political project. Campaigners showed up on doorsteps across the constituency to listen, as much as to talk.”
A lesson for political parties, certainly, but also a wider lesson for all institutions: in the era of persuasive communication at scale, rhetorical victory alone is no longer sufficient to win people’s hearts and minds and drive effective change.

