Yes, we do in fact care about privacy
Personalisation and generative AI have propelled the issue of privacy back up marketers' and policy makers' agendas
To what extent do we really care about privacy anymore? It has been over a decade since it was claimed that privacy is no longer a social norm, and yet the debate is still swirling. In fact, the issue is climbing back up the agenda as (yes, you guessed it) ChatGPT and generative AI more broadly raise questions about how privacy is managed. Because these tools sit alongside our other desktop apps, it is easy to see how we might hand over sensitive information and put it in the public domain, inadvertently creating privacy violations in the process. For example, we may use the tool to review a draft policy document; in doing so, that document may become part of ChatGPT’s training data and surface in responses to other people’s prompts.
There has also been a great deal of discussion about the UK’s forthcoming Online Safety Bill, which is intended to assist in the removal of harmful internet content, but which in doing so could require visibility of all content, including end-to-end encrypted messages within private messaging and communications apps. And not least, personalisation, which requires a large amount of data to be effective, is moving back up the agenda.
How we understand privacy, and what we assume public attitudes and preferences to be, is critical to a range of pressing issues with significant personal, societal, commercial and policy implications.
Given this, we need to start with the question of whether we actually care about privacy.
Are we concerned about privacy anymore?
One of the most common arguments for dispensing with privacy is summed up by a familiar notion that ‘If you’ve got nothing to hide, you’ve got nothing to fear.’ Countering this, the author Aleksandr Solzhenitsyn famously pointed out:
“Everyone is guilty of something or has something to conceal. All one has to do is look hard enough to find what it is.”
This suggests that we all have at least some degree of regard for our personal privacy, as Daniel Solove illustrated when he invited responses to the ‘nothing to hide’ argument on his blog. The responses included “So do you have curtains?” and “Can I see your credit card bills for the last year?”, questions that most of us would answer in a way that suggests we do in fact take steps to guard our privacy in everyday life.
A recent paper by one of the leading figures in privacy research, Alessandro Acquisti, can help unpack some of the behavioural science on this topic. He argues that the actions we take to protect our privacy are so pervasive that we tend not to notice or label them as privacy behaviours. Think of the way we lower our voices for intimate conversations, leave a group of people to take a personal call, or take care over showing our naked bodies. Nor is this a recent phenomenon: Acquisti cites Irwin Altman who, writing long before the advent of the internet, argued that our drive to protect our personal space is intuitive and universal across time and geography.
Indeed, if we look carefully, we can see how our drive for privacy translates into online behaviours: we move between different email accounts and logins to separate personal from professional spheres, manage privacy settings for social media posts, respond to group messages individually, blur the backgrounds on our video calls, and so on. There is also a host of survey data, observational studies and experiments offering a huge body of evidence that people care about privacy.
But this evidence comes with complexity, as there is also plenty of evidence of people not caring about privacy, such as choosing to give up personal data in exchange for minor conveniences or small rewards. Does this necessarily contradict the argument that we also care about privacy? Acquisti suggests two reasons why not.
1. Boundary regulation
First, there is a danger that we oversimplify our conception of privacy – assuming it is a one-directional desire to withdraw, protect or conceal. Altman suggests a better way to think of privacy is as boundary regulation: this is a much more dynamic conception of the issue, encompassing both the opening and closing of the self to others.
This is similar to the Johari Window we have touched on previously, which draws lines between what we know about ourselves and what others know about us. We balance and alternate between what we decide to keep to ourselves and what we are happy to share, all with reference to our needs and the context we find ourselves in. In fact, maintaining complete privacy at all times is neither desirable nor feasible.
This two-way perspective is perhaps consistent with Ipsos findings that broadly equal numbers of people globally agree (45%) and disagree (43%) with the statement: “People worry too much about their privacy online. I’m not concerned about what companies or the government know about me.”
2. Wants and opportunities
Acquisti offers a second reason why our sharing of personal data does not necessarily contradict our desire for privacy: in reality, it is often difficult to manage the distinction between the public and the private in online environments. Indeed, one survey found that a substantial proportion of internet users had tried private browsing mode, but of these, about two thirds overestimated the level of protection it offers. So while we may want to regulate our privacy, we may not have effective means to achieve it.
Systems of privacy
This supports the point, as we have noted previously, that the wider systems we live in are a critical means by which our behaviours are shaped. With regard to privacy, this seems particularly difficult given that our personal data is not only more extensive than ever before, but is accompanied by statistical tools able to match this data across different domains and services. The public seem quite clear about this, with the same Ipsos study reporting that 81% of us globally agree that “It is inevitable that we will all lose some privacy in the future because of what new technology can do.”
An illustration of the challenge we face in managing our privacy is the way that sensitive personal information can be derived from non-sensitive data: for example, researchers were able to match political preferences and other potentially sensitive information to the anonymous movie ratings of 500,000 subscribers of an online video streaming service. This is surely not something most of us would be aware of, let alone know how to begin managing. Even just knowing what can happen to our data, and the implications for our privacy, is very difficult for most people in online environments.
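To make the mechanics of this kind of linkage attack concrete, here is a minimal sketch in Python. It is purely illustrative: the user tokens, titles and ratings are invented, and the actual research used far more sophisticated statistical matching that tolerates noisy, partial and approximate data.

```python
# Illustrative sketch of a linkage attack on an "anonymized" ratings release.
# All names, movies and ratings below are invented for demonstration only.

# The released dataset: subscriber identities replaced with opaque tokens.
anonymous_ratings = {
    "user_8841": {("Movie A", 5), ("Movie B", 2), ("Movie C", 4)},
    "user_1377": {("Movie A", 1), ("Movie D", 5)},
    "user_9023": {("Movie B", 2), ("Movie C", 4), ("Movie E", 3)},
}

# Public, identified data: e.g. ratings someone posted under their own name
# on a review site. Only a handful of ratings are needed to be distinctive.
public_profiles = {
    "Alice": {("Movie A", 5), ("Movie C", 4)},
    "Bob": {("Movie D", 5)},
}

def best_match(profile, releases, min_overlap=2):
    """Return the anonymous record sharing the most ratings with `profile`,
    provided the overlap is large enough to be distinguishing."""
    scored = [(len(profile & ratings), token) for token, ratings in releases.items()]
    score, token = max(scored)
    return token if score >= min_overlap else None

for name, profile in public_profiles.items():
    token = best_match(profile, anonymous_ratings)
    if token:
        print(f"{name} is probably {token}")  # re-identification achieved
```

Running this re-identifies Alice from just two overlapping ratings, while Bob's single public rating is not distinctive enough: the point being that once matched, everything else in the "anonymous" record (every other rating, and whatever it implies) attaches to the named individual.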
And whilst transparency and control seem like good solutions, there is a great deal of research showing that these can in fact be used to elicit more self-disclosure than people would otherwise choose. There are no easy answers here, as we live in an age of ‘Surveillance Capitalism’ in which the incentives are often tipped towards facilitating disclosure rather than privacy, making effective boundary regulation difficult to achieve.
Privacy paradox?
To draw this together, what can we say about the apparent ‘privacy paradox’? This is where a gap appears between our stated desire for privacy and our actual behaviours – an example of the ‘say-do’ gap often referenced in behavioural science.
It should hopefully be clear from the above that this is not as straightforward as it first appears, but we can add some points.
The ‘say-do’ gap is often used to paint humans as ‘irrational’, unable or unwilling to express what we really think. But there is a flaw here that is easy to overlook: asking about broad attitudes and then comparing the answers to specific behaviours does not hold up to scrutiny. There is a whole host of reasons why a move from the general to the particular may not be a fair comparison.
Nevertheless, there remain examples where specific mental states towards privacy do not seem to translate into specific behaviours. For example, in one study involving a mobile phone app, despite participants reporting concern that there was too much monitoring and collection of personal data, nearly one third still downloaded the app (and 49% of those who downloaded it deemed it intrusive).
What should we make of this? The fact that behaviours sometimes fail to match self-reported preferences does not call into question all consumer demand for privacy. Context is important in human behaviour, and particularly so with regard to privacy. People are not always able to match their preferences with the actions available to them, and how we choose to manage our boundaries is hugely complex.
Privacy disparity
Of course, not everyone faces the same privacy challenges. Virginia Eubanks reports that welfare recipients are more vulnerable to surveillance, being seen as appropriate targets for intrusive programmes. She also reports that marginalised groups are in the dubious position of being subject to the cutting edge of surveillance in law enforcement, the welfare system and the low-wage workplace.
Carl D. Schneider links a lack of privacy for these groups to a sense of shame, suggesting there are many areas of human activity where privacy is bound up with dignity, and where its loss leaves an individual vulnerable to being reduced to mere bodily existence. Again, this suggests human relationships require a pattern of mutual and measured self-disclosure and respect for others; the negative impacts of failing in this are well documented.
Disinformation
Another important consideration is the link between privacy and online disinformation. Both advertising and disinformation require the wide availability of personal data to micro-target messaging at precise audiences so that material has the greatest impact.
Because personal data typically includes information such as political beliefs, age, location and gender, limiting access to it through greater privacy turns disinformation into a ‘weapon without a target’, with a higher chance of being lost in the noise.
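A toy sketch can make this logic concrete. The code below is purely illustrative, with invented attributes and records: with rich personal data a message can be aimed at a precise audience, but once the sensitive attributes are withheld the same selection returns no one, and the message must be broadcast indiscriminately.

```python
# Illustrative sketch: why micro-targeting depends on personal data.
# All attributes and records below are invented.

audience = [
    {"id": 1, "age": 67, "location": "Midlands", "politics": "undecided"},
    {"id": 2, "age": 23, "location": "London",   "politics": "committed"},
    {"id": 3, "age": 71, "location": "Midlands", "politics": "undecided"},
]

def micro_target(people, **criteria):
    """Select only the people whose attributes match every criterion."""
    return [p for p in people if all(p.get(k) == v for k, v in criteria.items())]

# With rich personal data, a message can be aimed precisely...
targets = micro_target(audience, location="Midlands", politics="undecided")
print([p["id"] for p in targets])  # -> [1, 3]

# ...but strip the sensitive attributes (greater privacy) and the same
# selection finds no one: the message loses its target.
anonymised = [{"id": p["id"]} for p in audience]
print([p["id"] for p in micro_target(anonymised, location="Midlands")])  # -> []
```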
Conclusions
Privacy is a huge and complex area, but a central theme is that we are active agents in managing access to our personal space. On this basis, philosopher Michael Lynch sees privacy as being “intimately connected to what it is to be an autonomous person”, for without privacy:
“If I learn what reactions you will have to stimuli and why you do what you do, you will become like any other object to be manipulated. You would be, as we say, dehumanized.”
And as he goes on:
“when we lose the very capacity to have privileged access to our psychological information — the capacity for self-knowledge, so to speak, we literally lose our selves.”
On this basis, privacy is deeply entwined with the concept of ‘self’, as reflected by Hannah Arendt in her book ‘The Human Condition’, which sets out what she considered to be the value of privacy:
it guarantees psychological and social depth, protecting those aspects of ourselves that are vulnerable to the constant presence of others;
it supports the public by establishing those boundaries which fix our identity;
and it preserves the sacred and mysterious spaces of life.
With this in mind, we can see how privacy is central to our political, social and cultural lives. At times it is simply difficult to see this clearly, but an example of what happens when privacy is denied us is the former German Democratic Republic (DDR). There, it is estimated that one in every 6.5 members of the population was acting as an informer for the Ministerium für Staatssicherheit, or Stasi. Author Anna Funder suggests that these extreme levels of state surveillance reduced the ability of young people to establish their own identity, making them either passive and compliant or angry and subversive.
The notion of boundary regulation is a useful way for marketers and policy makers to better understand privacy: it helps us knit together the ways in which we wish both to disclose and to withhold; how we want personalisation but can also find it uncanny; and how we want protection from wrongdoing but may not want that protection to involve intimate surveillance of ourselves.
One thing is surely clear with regard to privacy: attempts at generalisation should be treated warily and replaced with a more thoughtful, researched understanding of the intimate sensitivities of the specific issue at hand.