The Right to be Forgotten

On Thursday of last week Google published an online form allowing people to apply to have links to information about them removed from the results of searches for their name. The form is Google’s response to the decision by the European Court of Justice to enforce the ‘right to be forgotten’: a right to have ‘irrelevant’ or ‘outdated’ information about oneself removed from the Internet, unless there is a public interest in keeping it there. In the first 24 hours after the form went live, Google received 12,000 submissions, a rate of roughly 20 per minute.

The sheer number of requests is a headache for Google, but the real challenge for the company is philosophical, not bureaucratic: to balance individual interests in privacy against public and commercial interests in disclosure. Responsibility for decisions like this is usually devolved to courts and legislatures, not private companies. There are good reasons for this: courts and legislatures are accountable democratically (that is, to voters), whereas private companies are accountable primarily to their shareholders. Passing accountability from the state to Google has provoked well-founded concern in many quarters, both about the wisdom of placing such a responsibility on Google’s shoulders and about the authority and legitimacy of its ultimate decisions.

Google appears to share these concerns and has appointed a panel, including philosophers and experts on freedom of expression, to spend the next 3 years considering how the company should balance privacy and freedom of information and expression. This move is encouraging. It indicates that Google recognises that its earlier strategy of digging in its heels and lobbying hard to prevent any regulation is not only objectionable ethically, but also ineffective and short-sighted. It signals a new appreciation that the issues are fundamental and persistent, and call for a long-term, visionary strategy. Indeed, the advisory panel’s resident philosopher, Professor Luciano Floridi, has said that he hopes to achieve no less than a ‘total rethink of basic freedoms’.

In the meantime, Google must comply with the court’s ruling, and the online form is a first attempt. Google says that the form is only a preliminary measure, and that it is likely to devise alternative means to comply with the ruling in the future. The sooner this happens, the better. The request-deletion-by-form method has a number of shortcomings. One is its reactive nature: people only apply to have information removed once it is already causing them embarrassment, professional difficulties, or other problems. Another is that it is likely to be disproportionate. Deletion is a blunt solution: it might be necessary in many cases, but in some it will go further than what is required to satisfy individuals’ privacy interests. High numbers of borderline cases – where the proportionality of deletion is questionable – are therefore inevitable, which means Google is likely to find itself back in the courts very soon. It would be far better, both for individuals seeking to protect their privacy and for Google itself, for the company to supplement the form with additional privacy-protecting solutions. It has a better chance of reducing both the harm to individuals and the burden on its own resources if it addresses the problems the European Court tried to solve through the right to be forgotten before they lead people to make formal applications to have information removed.

What is the problem that the ‘right to be forgotten’ is supposed to solve? It’s not inaccurate, incomplete, libellous or privacy-invading information (for example, nude photos posted by an ex-lover, known as ‘revenge-porn’), because legal channels for removing such information already exist. It’s the availability of true but unwanted information that is also ‘irrelevant’ or ‘outdated’ for which there is no countervailing public interest to preserve it. There are many reasons why information about us might become irrelevant or outdated in ways that make it undesirable for it to continue to be as readily available as it has been. These different reasons naturally invite different solutions.

The first, most obvious reason is that the information relates to something in one’s life as a child or teenager. We can all agree that young people make mistakes and that these should – barring some exceptions – have no bearing on one’s adult life. This consensus is reflected in the fact that in many jurisdictions offences committed by minors do not result in lasting criminal records. If we can agree that people should be given a chance to clean their slate when they reach the age of responsibility, then why not translate this into digital practice? Google could offer to delete upon request, with no questions asked, any links listed under a person’s name that were posted before they turned 18 (or whatever age is considered reasonable for digital maturity). This would send a powerful message to young people that they can experiment, blunder, find their feet and mess up again as often as they need to without having to be perpetually reminded of the fact or judged by others on the basis of it. And the blanket, automatic nature of the rule would relieve Google of the burden of trying to weigh interests in these cases.

Sometimes digital information that is true but outdated can damage our interests in ways that justify making it less prominent than it currently is. The case that produced the ECJ decision may well fall within this category. It concerns a Spanish lawyer, who objected to the visibility in Google searches for his name of a news item that was over 20 years old and reported the repossession of his home to recover social security debts, which he had long since repaid. The successful petition was for the link to be removed, but it is likely that many of the problems it caused for the individual as a practising lawyer would have been solved by Google overriding its own algorithms to reduce the link’s prominence. Why not allow people to apply for these lesser measures when such measures satisfy their interests in privacy?

In some cases, people may be satisfied with being given the chance to respond to the information included in links by ‘tagging’ them with explanatory or clarifying notes. A solution along these lines may help address problems such as the inevitable confusion and potential mistakes that result from searches for names shared by significant numbers of individuals. It may also give people the chance to correct in advance minor but nevertheless unwanted and potentially distressing misunderstandings that might otherwise arise.

Some of the problems people have with the way information about them is displayed may be just that – problems with the way it is displayed – rather than problems with its availability per se. For example, some people might like to keep their hobbies and professional lives very separate, but have reasons not to want to make digital information about either disappear. Google could enable information to be categorised and displayed accordingly – perhaps in the same way that it is now possible to search under the category of ‘shopping’ or ‘images’. This would not conceal information, but would keep search results within relevant professional or personal parameters.

As it stands, Google’s delete-or-nothing approach is so blunt a tool that it risks being disproportionately censorious on the one hand or inadequately sensitive to privacy concerns on the other. Devising different kinds of remedies for different kinds of privacy problems is more likely to prevent unnecessary intrusions into privacy and freedom of expression; empower people to take back some control over information about themselves; and – by giving Google the option of offering people requesting deletion a lesser remedy, which is better than nothing – avert legal action. The specific ideas presented above may turn out to be impracticable. But my aim in suggesting them is not to provide a comprehensive alternative to the current solution. Rather, it’s to invite some thinking about the practical measures that could address a problem we’ve only just begun to get to grips with: how to balance privacy and freedom of expression and information in an Internet age. Google’s newly appointed panel is in a uniquely privileged position to do this. It is scheduled to issue its first report in early 2015. Let’s hope it doesn’t disappoint.

Dr Katerina Hadjimatheou works on the ethics of criminal justice, especially police ethics, preventive policing, and surveillance. She is a Research Fellow on SURVEILLE, an EU-funded project on the ethics of surveillance in serious and organised crime, for which she researches the ethics of data retention, preventive policing and profiling in border security. She was previously a research fellow on the FP7 DETECTER project, where she worked on the ethics of profiling in counter-terrorism.

