Safety is more important than privacy

It’s time to use technology to detect potential threats and worry less about outdated ideas of privacy, says Ron Iphofen

April 28, 2016
Image: door peephole painted as a bomb ready to explode. Source: Getty (edited)

In 2013, I submitted a commentary piece to Times Higher Education that concluded as follows: “I am now of the opinion that privacy has to be sacrificed for the sake of security. What matters most in times of crisis? That people won’t be able to find out who we are, where we are and when? Or that by accurate surveillance, by technologically sophisticated watchfulness and by cautious tracking of anyone exhibiting suspicious behaviour, disaster can be averted, lives saved and misery avoided? It is a matter of finding the lesser evil. When the next multiple killing occurs, will the loved ones of those murdered be placated by the rationale that nothing could be done to prevent it since people’s right to privacy had to be protected?”

Those words never made it into the final article, which focused on the extent to which universities could and should monitor staff emails (“Do they see all @ac.uk?”, Opinion, 5 September 2013). But I am more convinced than ever that they should be heard given the recent instances of such multiple murders in Paris, California, Brussels and Lahore. I strongly suspect that, given the available technology, much more could have been done to anticipate and possibly even prevent those atrocities had it not been for the privacy-related “obstacles” placed in the way.

It is not that I don’t value privacy; it is rather that I do not expect that large aspects of my life can ever be considered private again. If privacy is not exactly “dead”, it is certainly staggering about uncertainly.

There have been a range of revelations over recent years vindicating the view that privacy cannot be ensured in the modern, technological age. These include the newspaper phone-hacking scandal; a bug in Facebook’s data archive exposing the personal details of about 6 million people; the “mistaken” collection of data by Google’s Street View equipment in 30 countries (including complete email messages, logging-in details and medical listings); and, of course, Edward Snowden’s whistleblowing of the automated interrogation of international communications by US and UK intelligence agencies. Some commentators, such as Kelvin Wade, have even announced that privacy is “a 20th-century concept”. And surveillance from all sources has grown rapidly in line with technological developments and the assumed rise in threats to public safety.

My own heightened awareness of the issue comes from acting as an ethics adviser in a European public transport security project, known as SECUR-ED. Surveillance has become routine in most public transport settings but privacy, human rights and data protection laws all mediate the relationship between the transport “authority” and the passenger. Personal data cannot be collected indiscriminately. It can be collected only to address a specific and identifiable problem and, even then, prior checks under local and national data protection regulations apply.

SECUR-ED alone was funded to the tune of more than €40 million (£32 million), and there are nearly 200 other security-related European Union projects. These involve academics from across the Continent and the disciplines, including ethicists, engineers, communications technologists, biomedical scientists and systems designers. But although ethicists and surveillance technologists can and do work hand in hand on many of these projects, the fundamental contradiction between privacy and security ensures that this remains an uneasy alliance. It is one that some human rights activists resist joining, preferring to remain in their “camp” to fight for privacy untainted by compromise. And no doubt some surveillers are content for them to do so and refrain from interfering in their work. One can only assume that both are high-mindedly seeking a free and safe society. It is just that they profoundly differ in the fundamental principles upon which they think such a society can be founded.

We typically take for granted our ability to walk down a familiar street at any time of day or night and not feel at risk of harm from others. Our complacency is challenged only when we learn that an unarmed soldier out of uniform can, for example, be hacked to death in broad daylight on the streets of London. Or that many innocent bystanders enjoying the finish to a marathon in Boston can be blown to pieces (with the whole event recorded on mobile phones). Yes, privacy is precious, but so too is our security. And when we ask the authorities to help us to attain both, we confront them with a dilemma. I want them to respect my privacy, but perhaps not the privacy of those planning to do harm to me or my community. I want them to keep me secure by securing those who threaten my security.

A 2013 editorial in The Guardian on the undercover investigation by the Metropolitan Police’s Special Demonstration Squad (SDS) into murdered black teenager Stephen Lawrence’s family complained: “This is the kind of thing that happens when, without adequate legal restraint, fears for security are allowed to take priority over privacy.” What needs establishing is when such “fears for security” (talk of which implicitly suggests overreaction) are unjustified; in other words, when and how the “adequate legal restraint” can be applied, and by whom. There is rarely a situation in which security can be protected without the expense of some loss of privacy.

Image: mounted CCTV camera casting the shadow of an automatic weapon. Source: Getty (edited)

It is obviously true that privacy was more easily protected in the past. But starting with the ability to steam open glued envelopes, through photographic reproduction and phone tapping to automated hacking devices and GPS tracking, as the technology of data handling has grown in sophistication, so too has the means to “interrogate” it. Now human rights advocates seek “privacy by design” for communications technology. But if techniques such as encryption exist, so too do the means to circumvent them. In the wake of the Edward Snowden affair, Apple made great play of its refusal to help the FBI hack into one of the San Bernardino shooters’ iPhones, claiming that to do so would potentially compromise the privacy of all its customers. But, in the end, the FBI reportedly gained access to the phone with the help of “professional hackers”. That point also highlights the fact that the means to invade privacy are not the exclusive preserve of well-intentioned state authorities. Human rights watchers refer routinely to the chilling effects of knowing that the state can invade our privacy in so many ways. Personally, I find it even more chilling that those whom I trust much less than the state may be doing that too.

One thing is certain: even if we restrict or ban “good” people from carrying out covert surveillance, there are plenty of “bad” people who will ignore such restrictions and carry on employing the technology for their own nefarious interests regardless. Quite simply, the technology is there and available for anyone to use. I have witnessed the use of behavioural tracking devices, facial recognition software and “sniffing” technologies that can detect explosive substances.

There are several types of technology that could have flagged up the Brussels airport suicide bombers for wearing gloves on only one hand, potentially allowing them to be intercepted before they detonated their devices. Indeed, if a generic surveillance technology of the sort that Snowden blew the lid on had been in operation, their intent might have been disclosed even before they donned their suicide belts. It is also worth remembering that investigative journalists and social science researchers (ethnographers or anthropologists usually) also use their own form of covert surveillance, subject to the limits imposed by the law and ethics committees, as well as social mores and their own consciences. Judgements about applicants’ real purpose and intent remain a standard problem for formal research ethics review – they can limit the work of the well-intentioned, while those less well-intentioned will deceive and dissemble to continue their activities regardless of the moral judgements of others.

The problem lies in the extrapolation of concern. Just because some people or agencies have abused their position does not mean that all surveillance is unfair, unjust or badly conceived. The undercover monitoring of the Lawrence family or the use of the identities of dead babies by undercover SDS agents merely illustrates how not to conduct such work. Whatever the concerns might have been at the time about race-related riots or criminal activity, it is clear that such approaches were not authorised at a higher governmental level – and the political checks and balances of a democratic society should generally limit the risks of such abuse.

It is noteworthy in this regard that there are many local variations in where people think the balance between privacy and security should lie. Views about the right to privacy are much stronger in Germany and France than, say, in the UK – as demonstrated by the fuss in the former over the use of graffiti-spotting drones by railway company Deutsche Bahn. Indeed, on an even more topical note, sensitivities in Panama about privacy must be particularly acute if the co-founder of the Mossack Fonseca law firm is at all representative of his countrymen. Responding to the leak of thousands of the firm’s documents, highlighting the huge extent of tax avoidance by the global elite, Ramon Fonseca complained that “there is an international campaign against privacy [which] is a sacred human right”. Then again, it is perhaps not surprising that a man in his line of work would hold to such a principle.

Ethics has always been about weighing harms and benefits. Regarding Snowden’s whistleblowing, for instance, the key questions would be whether the benefits of his actions (alerting the general public to breaches of their privacy) outweigh the harms (letting terrorists know that they are being watched). The judicious outcome is achieved when most people perceive the inevitable compromise, balancing the harms and benefits, to be tolerable.

There is one final point to make about the growth of sophisticated surveillance technology, whether covert or overt. That is that it appears to have almost entirely supplanted the old-fashioned undercover operative. Those fixated by the past excesses of the SDS may well be comforted by that, but it is one reason that terrorism is flourishing. The terrorists responsible for the Paris and Brussels atrocities were an intimate network that didn’t even need to communicate via hackable mobile phones: they lived in each other’s communities and houses. Only an embedded agent could have monitored them effectively. The ethics of such intimate infiltration remain complex to negotiate, but it is senseless and potentially suicidal to take the view that it is never justified.

There is no denying that effective counterterrorism can require enhanced state oppression, but terrorism is itself a form of oppression – and a much less democratic form. I, for one, am entirely happy to sacrifice some of my privacy rights to ensure my security and that of my loved ones.

Ron Iphofen is an independent research consultant. His chapter “Ethical issues in surveillance and privacy” appears in A.W. Stedmon and G. Lawson (eds) Hostile Intent and Counter-Terrorism (Ashgate, 2014).

Postscript

Print headline: Better safe than sorry


Reader's comments (5)

Have you considered the opinion that by altering the balance of security/privacy you in effect hand terrorists a victory without putting them to the trouble of actually committing a crime?
So what alternative would you propose?
Everything has private and public dimensions. Security, whether as prevention, protection or palliative, has its own privacy: a strategic secret that should not easily leak out to the enemy. Security and secrecy (privacy) are complementary weapons of warfare. The mere fact that secrecy has suffered at the hands of hackers with advances in technology does not mean that the end of secrecy is near. What it means is that a new security lesson has been learnt to strengthen secrecy against future technological attacks.
Most people would have less concern that GCHQ had the ability to read their emails than that their borough council was monitoring what they were putting in their dustbins. In other words, most people are willing to confer extraordinary powers to deal with extraordinary threats, but not to deal with routine criminality, disorder or non-compliance with bureaucracy.
But I disagree, because the protection of privacy leads to safety. How? Say, for example, you have a phone. It is priceless and very special to you. If you don’t tell anybody about the phone, it will guarantee your safety and your phone’s safety.