Risk perception & AI: How to enable a public discourse
Why we need to understand the psychological and sociological factors that impact and shape people’s perception of risks concerning AI
Artificial intelligence (AI) may well be one of the most transformative technologies of our time. Bill Gates has even called it as revolutionary as mobile phones and the internet. And as things stand, AI has the potential to revolutionize domains from healthcare and education to entertainment and security.
However, it also poses significant risks and challenges, ranging from ethical dilemmas and social impacts all the way to existential threats. Indeed, taking account of these risks, a group of prominent scientists and entrepreneurs, including Elon Musk, signed an open letter calling for a six-month pause on the development of AI (specifically, the development of LLMs such as ChatGPT) until its safety and social implications are fully understood and addressed. And more recently, Sam Altman addressed the US Congress, speaking out in favour of AI regulation.
However, whilst a number of industry-leading figures are raising awareness of the risks that come with these recent developments, decisions involving AI do not currently include the general public as part of the discourse.
To better understand how to enable such a discourse, we explore what behavioural science tells us about how people perceive the risks and benefits of AI. What are the psychological and contextual factors that influence judgments and might currently mean people dismiss some of the risks? And importantly, what can we do to foster a better public discourse?
Why should we care about public perception when it comes to AI?
To answer this question, it is worth going back to an early paper by leading risk psychologist Paul Slovic. He argues that risks are neither neutral nor objective, but political: they involve values, interests, and power. Risk assessment, the process of identifying and quantifying hazards, is often influenced by the assumptions, biases, and preferences of experts and stakeholders. Risk management, the process of deciding how to deal with hazards, is often influenced by the preferences, pressures, and agendas of decision-makers and interest groups. And ultimately politics, shaped by risk perception, values, and power among other factors, influences the relationship between risk assessment and risk management.
Those who pay attention to recent developments in AI will note that the speed at which progress is being made right now is remarkable, and it feels almost as if a lot of things are currently happening ‘to’ us.
But in reality societies should be actively engaged in decision making about this rapidly changing technology. This includes decisions about:
how to handle the impact on labour markets
whether to accept the potential growth that might come with AI (which could dramatically worsen any climate change outlook)
how to ensure safety (Italy temporarily banned ChatGPT due to privacy concerns)
how to ensure inequalities are not worsened
how to include AI as part of our daily lives
But more broadly, there need to be decisions about the acceptability of the inevitable risks that come with AI.
And there are plenty of risks. In the short term, mental health problems may be exacerbated (consider the first suicide in Belgium connected to a chatbot), along with challenges in protecting vulnerable populations such as children, concerns about job displacement, privacy infringement, and algorithmic biases that may worsen existing inequalities.
In the medium term, AI has the potential to contribute to economic disparities, the development of unethical autonomous weaponry, and the destabilization of democracy due to disinformation, such as deepfakes.
And in some more extreme long-term scenarios (see the conversation between Lex Fridman and Eliezer Yudkowsky), we could lose control over these systems if the AI race continues without safety precautions; if values are misaligned, we might face existential risks.
Crucially, societies should evaluate these risks against a counterfactual. Any event with even the tiniest potential to eradicate humanity entirely would surely have a value function that essentially approaches infinity. To comprehend these situations, we must compare the available options against inaction. Currently, inaction encompasses unresolved climate change, child poverty, child mortality, wars, and more. If AI is believed to help address some of humanity's most pressing challenges, or to improve our lives more broadly, then we can weigh the risks of taking action (on AI) against the risks of doing nothing.
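The counterfactual logic above can be sketched as a simple expected-harm comparison. All probabilities and harm values below are hypothetical placeholders chosen purely for illustration; the point is only that any honest comparison must include the harms of inaction, and that treating a catastrophe's harm as effectively unbounded makes naive expected-value reasoning collapse.

```python
# Hypothetical expected-harm comparison: acting on AI vs. doing nothing.
# Every number here is a made-up placeholder, not an estimate.

def expected_harm(scenarios):
    """Sum of probability * harm over (probability, harm) pairs."""
    return sum(p * h for p, h in scenarios)

# Acting (developing AI with precautions): a small chance of catastrophe,
# plus a substantial chance of moderate transition harms (jobs, privacy...).
act = [(0.01, 1000.0), (0.50, 10.0)]

# Inaction: problems AI might help with (climate, health, poverty)
# continue to accrue harm with high probability.
do_nothing = [(0.90, 20.0)]

print(expected_harm(act))         # 0.01*1000 + 0.5*10 = 15.0
print(expected_harm(do_nothing))  # 0.9*20 = 18.0

# If the catastrophic harm is treated as unbounded, expected harm diverges
# regardless of how tiny its probability is, which is why such scenarios
# resist ordinary cost-benefit framing.
unbounded = [(1e-9, float("inf"))]
print(expected_harm(unbounded))   # inf
```

Under these invented numbers inaction is the worse option, but flipping a single placeholder flips the conclusion; the sketch shows the shape of the argument, not its answer.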
These choices must be addressed through democratic processes, and for any productive democratic discourse, the general public needs to understand the decisions that societies currently face and the risks associated with different options.
The general public will ultimately have significant stakes as regular users of AI systems, knowingly or unknowingly (Keely et al. 2020). If they cannot form risk perceptions grounded in a solid understanding of potential future scenarios and a value system that encompasses AI usage, it is probable that both risk assessment and risk management will fail to align with public interests.
So how does the general public view AI?
The World Economic Forum published research conducted by Ipsos in 2022 on public perceptions of AI around the world. Perhaps surprisingly, a large proportion of respondents (64%) stated that they have a good understanding of what artificial intelligence is, while 39% said that products and services using artificial intelligence make them nervous.
Participants expected many aspects of life to improve due to AI: from education to entertainment, transportation, shopping, safety, the environment, food, and income. Only three categories were expected to get worse by more participants than expected them to get better: employment, cost of living, and freedom/legal rights.
Notably, trust in AI is correlated with perceived understanding, and trust was higher in emerging economies than in high-income countries. On the topic of trust, Yigitcanlar, Degirmenci & Inkinen found that the public is concerned about AI invading their privacy, but much less concerned about AI becoming more intelligent than humans; people trust AI in their own lifestyles, but trust is lower for companies and governments deploying AI.
Another study with a German sample showed that those with less trust in AI see the impacts of AI as more positive but less likely, compared to those with more trust in AI. The authors conclude that AI remains a black box for many.
Which factors make a public discourse on AI difficult?
There are myriad factors that might shape people's current levels of risk perception and, ultimately, explain the potential lack of an informed public discourse.
First, there is a large number of contextual factors. The topic of AI is complex and fast-moving, and even for those paying attention, it is so all-encompassing that it is difficult for individuals to spend enough time on it to fully grasp it. The news space is also currently so dominated by other big societal issues that there might simply not be enough headspace to engage with yet another set of issues that might bring negative consequences.
In addition, trust in technology has historically been high, and as much as data and privacy issues have been concerns in the past, we have never faced risks such as wars being triggered between countries, or worse. Other technological changes, such as the introduction of the internet or mobile phones, also happened much more slowly, potentially giving people wrong perceptions of time frames and an inaccurate sense of our ability to act.
Apart from contextual factors, there are also cognitive aspects worth considering. Bilgin (2012) argues that the ability to imagine negative outcomes helps shape our perception of the likelihood of negative events occurring, and when it comes to impacts on society, or indeed risks, some outcomes are simply difficult to imagine. People are quick to draw parallels to sci-fi movies, which makes the risks seem exaggerated. Construal level theory would also predict that risks appear smaller because they are distant and abstract.
Underlying current risk perceptions are also mental models that seem open to being challenged. Some of these mental models concern the way AI works (“couldn’t we simply pull the plug?”), but others concern assumptions of linear developments of risks. For example, there might well be an assumption that we would notice if a large language model started to ‘lie’ (or ‘hallucinate’, as it has also been called), when in reality there may well be a step-change: the AI might initially lie very little, and then suddenly a great deal.
Finally, emotional factors will also shape the public reaction to potential risks. The affect heuristic (e.g. Finucane et al. 2000) predicts that risk perception follows feelings, and in the case of recent developments in AI, and large language models in particular, experiences have largely been positive (see a study by Noy & Zhang (2023) demonstrating an increase in productivity and job satisfaction). The positive experience could also partly stem from a budding tendency to anthropomorphize AIs, which act human-like in their friendliness and empathic responses. It can be an emotionally difficult leap to assess the potential risk from a distance.
Given some contextual factors, such as the current political circumstances, it might be difficult to bring the general public into a discourse on the risks and impacts of AI. Yet such a discourse is surely important, even if the change brought about by AI turns out to be of much smaller magnitude than anticipated.
The White House has recently published a Blueprint for an AI Bill of Rights, which highlights important principles to guide the design, use, and deployment of automated systems, such as the right to opt out of automated systems and have your issues resolved by a real human. This certainly appears to be a step in the right direction and can serve as a basis for public engagement.
We acknowledge that it is difficult to demonstrate certain risks and elicit public values when so much is unknown; we might be in what has been called “deep uncertainty”. Nevertheless, ensuring there are discussions about the potential outcomes for citizens of the latest AI developments will be important. Education and awareness should focus on how AI will impact our lives and societies, and what risks we may trade off against potential benefits (and against doing nothing).
In addition, many of the current risks have been pointed out by a very demographically constrained part of society, likely biased towards a male, white, well-educated, and wealthy demographic. Therefore, the focus should not merely be on top-down education and awareness-raising. A consultative approach is needed, involving the public in red-teaming to identify risks that cannot be seen through the lens of a technology professional.
Such engagement could take the form of public forums, workshops, or online discussions to facilitate a dialogue between experts, policymakers, and citizens, with a focus on marginalized or under-represented groups.
In this vein, Alondra Nelson, co-author of the Blueprint for an AI Bill of Rights, has spoken of the increased need for public consultations, including amplifying the voices of those who are not experts. She compares the need for consultation to that in social housing, where non-experts, namely those impacted by the policies, are able to voice their views, needs, and values.
And in order for such consultations to be successful, we should ensure that we understand the psychological and sociological factors that impact and shape people's perception of risks, an area where applied behavioural science will be of increasing importance.