Restricted and Redacted: Where now for human rights and digital information control?
Last November the Information Law and Policy Centre Annual Lecture and Workshop brought together a wide range of legal academics, lawyers, policy-makers and interested parties to discuss the future of human rights and digital information control. Paul Magrath from ICLR was there. The papers from the workshop have recently been published in a special edition of Communications Law, vol 22.1 (2017).
The following summary of the day’s discussions was written up afterwards on the Information Law and Policy Centre blog and is reproduced here with kind permission of the editor.
Information Law and Policy Centre’s annual workshop highlights new challenges in balancing competing human rights
A number of key themes emerged in our panel sessions, including: the tensions involved in balancing Article 8 and Article 10 rights; the new algorithmic and informational power of commercial actors; the challenges facing law enforcement; the liability of online intermediaries; and future technological developments.
The following write-up offers a very brief summary of each panel and of Rosemary Jay’s evening lecture.
Morning Session
Panel A: Social media, online privacy and shaming
Helen James and Emma Nottingham (University of Winchester) began the panel by presenting their research (with Marion Oswald) into the legal and ethical issues raised by the depiction of young children in broadcast TV programmes such as The Secret Life of 4, 5 and 6 Year Olds. They were also concerned by the live-tweeting that accompanied these programmes, noting that highly abusive tweets could be directed at the children taking part.
James and Nottingham noted that in recent legal cases where the balance between Article 8 and Article 10 rights has been considered, courts have ruled that there should normally be an “appreciable benefit for the child” to justify an invasion of privacy. They questioned whether the broadcast of these programmes could be considered of “appreciable benefit” to the children depicted.
More generally, they highlighted that legal provisions for children’s privacy were not always explicit in legislation, and questioned whether children’s privacy should depend on the conduct of their parents. They recommended the appointment of a children’s commissioner for media, broadcasting and the internet, as well as tighter ethical and regulatory review of such programmes.
The next two papers in the panel addressed the dilemma of enforcing the law effectively to protect the privacy of (vulnerable) individuals while maintaining the liberal principle of freedom of expression. On the one hand, Maria Run Bjarnadottir (Ministry of the Interior in Iceland, University of Sussex) spoke to the difficulties national law enforcement faces in intervening in revenge porn cases, given the jurisdictional limits of policing the internet. On the other, Tara Beattie (University of Durham) highlighted how aspects of legislation on online pornography could discriminate against groups regarded as falling outside heterosexual norms.
The issue of ‘consent’ undoubtedly lies at the heart of this tension: while victims of revenge porn have not consented to the distribution of their images online, many of those who engage in alternative sexual activities online are consenting adults. Although exploring and understanding ‘consent’ might therefore assist in formulating policy and law in this area, it remains problematic in a number of respects. Moreover, the law currently still prohibits a variety of activities whether or not ‘consent’ is obtained.
David Mangan (City, University of London) closed the panel by considering how the concepts of authorship and audience might assist adjudication in social media defamation cases. Arguing that these concepts have been engaged only loosely in more traditional press cases, he suggested that dissecting the source and form of authorship, as well as the intended audience of a social media communication, might help balance the competing interests of protecting reputation and defending freedom of speech.
Panel B: Access to information and protecting the public interest
Meanwhile, a parallel session considered the ways in which the public accesses information and the methods of protecting public interest information. The speakers presented three diverse papers which approached the theme from different legal and methodological perspectives. Ellen Goodman (Rutgers University), speaking over Skype from the United States in the early morning after the presidential election, drew our attention to the way in which the rise of big data and algorithmic processes is presenting obstacles to citizens’ ability to know. Giving examples of risk and quality ratings in US health and education services, she set out the pros and cons of government use of algorithms: while they can be more efficient, they also risk introducing systemic bias and error into public data and information. Transparency of these processes is crucial, but so far freedom of information requests for algorithms and algorithmic processes have been denied.
The next paper, given by Vigjilenca Abazi (Maastricht University), considered whistleblowing in Europe. Whistleblowers increasingly contribute to the disclosure of some of the most serious abuses of power, and there has been a shift towards a more positive public perception of whistleblowing. Legal reforms to protect whistleblowers have not followed, however. Some EU member states (Ireland, Romania, Slovenia and the UK) offer more advanced legal frameworks, but protection is often fragmented or missing. In her view, we need a strong legal framework with clear requirements for protected disclosure that affords wide protection to individuals who expose wrongdoing in the public interest. To this end, she was involved in drafting a new EU directive on whistleblower protection, presented by the Greens/European Free Alliance group in the European Parliament in May 2016.
Finally, Felipe Romero-Moreno (University of Hertfordshire) introduced us to the ‘notice and staydown’ concept, under which a piece of content, once notified for removal by rightholders, is de-indexed forever and disappears from the internet. In his view, the use of content identification and filtering technology could pose a fundamental threat to human rights. He assessed the compliance of ‘notice and staydown’ with the Strasbourg Court’s three-part test: whether the approach is, firstly, ‘in accordance with the law’; secondly, pursues one or more of the legitimate aims listed in Articles 8(2) and 10(2) ECHR; and thirdly, is ‘necessary’ and ‘proportionate’. He concluded that ‘notice and staydown’ could fail parts one and three of the ECtHR’s test, as well as the three ECtHR principles of equality of arms, admissibility of evidence, and presumption of innocence, thereby violating internet users’ rights under Articles 6, 8 and 10 ECHR.
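To make the mechanism concrete, here is a minimal sketch of how hash-based ‘staydown’ filtering typically works, assuming exact-match fingerprints; the function names and data are purely illustrative and not drawn from any particular platform.

```python
import hashlib

# Illustrative 'staydown' filter: once a rightsholder notifies a work,
# its fingerprint joins a blocklist and every future upload is checked
# against it -- the "notified once, filtered forever" model.

blocklist = set()

def fingerprint(content: bytes) -> str:
    # Plain cryptographic hash for illustration; real systems use
    # perceptual hashing or content ID to catch altered copies.
    return hashlib.sha256(content).hexdigest()

def notify_takedown(content: bytes) -> None:
    # Rightsholder notice: blocklist the work's fingerprint permanently.
    blocklist.add(fingerprint(content))

def allow_upload(content: bytes) -> bool:
    # Reject any upload matching a previously notified work.
    return fingerprint(content) not in blocklist

notify_takedown(b"notified work")
assert not allow_upload(b"notified work")       # re-upload is blocked
assert allow_upload(b"unrelated user content")  # other content passes
```

An exact hash misses trivially altered copies, which is why deployed systems rely on broader, fuzzier matching; it is precisely that breadth and automation that drive the proportionality concerns raised in the paper.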
Afternoon Session I
Panel A: Data protection and surveillance
This session, chaired by Nora Ni Loideain (University of Cambridge), considered different aspects of privacy, data protection and surveillance. First, Jiahong Chen (University of Edinburgh) discussed issues of applicable law under the General Data Protection Regulation (GDPR). He explained that while the GDPR follows a pattern similar to the 1995 Data Protection Directive with regard to the territorial scope of EU data protection law, the Regulation, unlike the Directive, no longer addresses the issue of applicable national law. At the same time, the Regulation explicitly allows Member States to deviate from its default rules on dozens of specific matters. In his view, the removal of the rule governing applicable law will inevitably give rise to serious conflicts of laws, exacerbated by the fact that Member State laws have defined their own scopes in incompatible ways.
Next, Jessica Cruzatti-Flavius (University of Massachusetts) gave us a very specific case study: the identity of sex trafficking victims, whose names are changed by their traffickers, with ID numbers, bar codes and pimps’ names often branded on the victims’ skin. After setting out some of the social and theoretical ideas guiding her research, she considered the legal issues in the context of European privacy rights. Among the questions she explored: what happens if someone’s personal data is completely and involuntarily removed from government information systems? Does the ‘right to a name’, implied in Article 8 ECHR (the right to respect for private and family life), provide an appropriate cause of action against name erasure and rebranding?
It was back to data protection and the GDPR for Wenlong Li’s (University of Edinburgh) paper, which addressed the new Right to Data Portability (RDP): the right of individuals to obtain a copy of their personal data and to transmit it to another data controller without hindrance from the first. Aspects of this new right raise difficult questions, he suggested. As an extreme form of the right of access, the RDP may enhance the transparency of profiling in the era of big data, but it may not adequately protect data privacy. In his view, an approach grounded in ‘property’ rather than data protection might be more suitable for conceptualising the RDP as a new human right.
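Article 20 GDPR requires the ported copy to be provided in a ‘structured, commonly used and machine-readable format’. As a rough illustration only, with an invented record, a controller might satisfy that formal requirement with something as simple as a JSON export:

```python
import json

def export_personal_data(record: dict) -> str:
    # Serialise a data subject's record as JSON: one "structured,
    # commonly used and machine-readable" format a controller could
    # hand to the individual or transmit to another controller.
    return json.dumps(record, indent=2, sort_keys=True)

# Hypothetical record held by the first controller.
subject_record = {
    "name": "A. Example",
    "email": "a.example@example.org",
    "preferences": {"newsletter": True},
}

print(export_personal_data(subject_record))
```

The format is the easy part; the questions raised in the paper concern what such portability does for profiling transparency and data privacy, not the plumbing.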
The final paper, given by Ewan Sutherland (Wits University), reminded us of the broader surveillance debates, both past and present. He described the expansion of communications monitoring in the regulatory state: from the interception of post and telegrams, to wire-tapping and call-logging, and now to the harvesting of internet metadata and mobile geolocation data. Drawing our attention to the complexities of market-driven technological development, globalisation and encryption techniques, he suggested that public understanding of wiretapping and the use of metadata is often based on novels and films, which tend to overstate capabilities.
Panel B: Technology, power and governance
Chaired by Chris Marsden (University of Sussex), Panel B considered issues of technology, power and governance in relation to new online dimensions of human rights. Monica Horten (London School of Economics) began the session by arguing that Susan Strange’s concept of structural power, traditionally understood in relation to nation states, now also applies to commercial internet companies such as Facebook and Google. As a consequence of their power over knowledge and security, these companies are able to influence political choices without appearing to assert any direct pressure.
The role of commercial companies in activities that might previously have been understood as functions of the state was also emphasised in the final paper of this session, by Allison Holmes (Kent University). She maintained that communication service providers could be performing “functions of a public nature”: for example, companies are being asked to retain data in the ‘public interest’ rather than for their commercial activities, as under the provisions of the Investigatory Powers Bill compelling communication service providers to retain communications data for a period of 12 months. As a consequence, she asked whether these companies could be subject to section 6 of the Human Rights Act 1998.
Between these two papers, Marion Oswald (University of Winchester) presented research (with Jamie Grace of Sheffield Hallam University) on whether UK police forces had adopted computational or algorithmic data analysis or decision-making in the analysis of intelligence. She was surprised to discover that 86% of the police forces that replied said they were not using such algorithmic tools. She concluded that although these tools do not yet seem to be used as extensively as in the United States, a potential for human rights infringements at the micro level remains.
The adoption of these tools is also likely to increase, particularly in cities and urban environments. In his paper, Perry Keller (King’s College, London) described cities as the epicentre of mass surveillance, where citizens are subject to particularly intense monitoring. He argued that the resulting “loss of anonymity, obscurity and personal privacy” must be counteracted by new forms of transparency and accountability: algorithms which dictate surveillance must be transparent in their design, content and data output, and effective connections must be made between “algorithmic based decision-making” and “democratic or regulatory forms of oversight”.
Afternoon Session II
Panel A: Intermediary liability
Opening this panel via Skype, Judit Bayer (Miskolc University) considered the liability of internet intermediaries for user-generated content. She noted that the current legislative approach tends to impose obligations on intermediaries to interfere with user-generated content, rather than focusing on the right to freedom of expression of those who post it. She believed that intermediaries’ obligations should support that right: they should guarantee equality by not discriminating on grounds of content, and they should act transparently by opening up their content selection policies and algorithms.
Mélanie Dulong de Rosnay’s (CNRS-Sorbonne Institute for Communication Sciences) paper (with Félix Tréguer, CNRS-Sorbonne Institute for Communication Sciences, and Federica Giovanella, University of Trento) considered how the CJEU’s Mc Fadden judgment on operators of open Wi-Fi networks affects the future of Community Networks in Europe. Community Networks have emerged as an alternative to commercial Internet Service Providers, and they are potentially more participatory, democratic and respectful of communications privacy. At the same time, the plurality of their governance, economic and architectural models makes them challenging to regulate. Dulong de Rosnay argued that the case of Community Networks is representative of the structural tensions between parliament and the courts in the field of online human rights.
For the final paper of the session, we were transported to the other side of the world for a look at Australian approaches to search engines’ liability for defamation. David Rolph (University of Sydney) described this issue as a new dimension to the longstanding tension between freedom of expression and the protection of reputation. He argued that recent judicial decisions by the Australian courts call for greater attention to the concept of publication — a fundamental, but relatively unexplored element of defamation cases — particularly in view of the varying emphases given in different decisions to the notions of knowledge, risk and profit.
Panel B: Privacy and anonymity online
Gavin Phillipson (University of Durham) began this session with an entertaining and informative look at the PJS privacy injunction case. He argued that the Supreme Court decision in the case has rebalanced press freedom and privacy in a number of respects. In particular, he noted that the Supreme Court ruled against a recent conflation, both by the press and in legal judgments, of a “right to criticise” with a “right to reveal private facts in order to criticise”. The case also clarified that widespread dissemination of private information online in other jurisdictions, as occurred in PJS, does not render an injunction worthless: the Supreme Court held that an injunction could usefully protect the parties from further intrusion and further harm even if it could no longer guarantee confidentiality or secrecy in the digital age.
In the second panel paper, Fiona Brimblecombe (University of Durham) examined the Article 8 case law of the ECtHR in order to assess how the ‘right to be forgotten’ in Article 17 of the new General Data Protection Regulation might be interpreted. She outlined the ‘balancing factors’ the ECtHR considers, such as: “What constitutes intimate information?”, “What format is the data in?”, “What is the prior conduct of the data subject?” and “In what circumstances was the personal information obtained?”.
In the final paper of this session, James Griffin (University of Exeter) and Annika Jones (University of Durham) considered the future of privacy in a world of 3D printing. The premise of the paper was the potential for privacy breaches through hidden watermarks and embedded data in 3D printed objects. Over the summer, around thirty interviews were conducted with Chinese 3D printing companies to ascertain their views on these privacy concerns. The staff interviewed recognised the potential commercial value of personal data and watermarking, although actual practice in collecting personal data and inserting tracking technology was mixed. Interviewees were sensitive to the personal data that could be collected through 3D printing, particularly in the medical field. They were also aware that privacy was important for retaining customer confidence and trust, and for ensuring that 3D printed products complied with overseas regulation.
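To give a flavour of the kind of embedded data at issue, below is a hedged sketch of one generic watermarking idea: hiding an identifier in the least significant digits of a mesh’s vertex coordinates, at a scale far below printing tolerance. It is a textbook technique chosen for illustration, not a scheme described by the interviewees.

```python
# Generic least-significant-digit watermark for a 3D mesh: each bit of a
# tracking ID nudges one vertex coordinate by a quantum far smaller than
# a printer's physical tolerance, so the object looks and prints the same.

QUANTUM = 1e-5  # illustrative step size, well below print precision

def embed_watermark(vertices, bits):
    # Encode each bit in the parity of one quantised coordinate.
    flat = [c for v in vertices for c in v]
    for i, bit in enumerate(bits):
        q = round(flat[i] / QUANTUM)
        if q % 2 != bit:        # force the parity to match the bit
            q += 1
        flat[i] = q * QUANTUM
    return [tuple(flat[j:j + 3]) for j in range(0, len(flat), 3)]

def read_watermark(vertices, n_bits):
    # Recover the bits from coordinate parity.
    flat = [c for v in vertices for c in v]
    return [round(flat[i] / QUANTUM) % 2 for i in range(n_bits)]

mesh = [(0.10000, 0.20000, 0.30000), (0.40000, 0.50000, 0.60000)]
marked = embed_watermark(mesh, [1, 0, 1, 1])
assert read_watermark(marked, 4) == [1, 0, 1, 1]  # the ID survives in the geometry
```

A production scheme would need redundancy and error correction to survive slicing and printing; the privacy point is that such a mark can quietly tie a physical object back to the customer who ordered it.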
Evening lecture
This year’s annual lecture was given by Rosemary Jay, Senior Consultant Attorney at Hunton & Williams and author of Sweet & Maxwell’s Data Protection Law & Practice. She is currently working on a new book on the General Data Protection Regulation (GDPR), a companion text to Data Protection Law & Practice, due to be published later this year. Jay’s lecture, chaired by Lorna Woods (University of Essex), with responses from Andrea Matwyshyn (Northeastern University) and James Michael (IALS), provided a lively and entertaining journey through the GDPR’s provisions as they relate to biometric data, that is, data relating to biology and identity.
Jay had given her lecture the light-hearted title ‘Heads and Shoulders, Knees and Toes, Eyes and Ears and Mouth and Nose…’ and was somewhat surprised, when preparing it, to realise that all of these bodily features can in fact be used for unique identification. Our physical and physiological characteristics, together with our ‘behavioural ID’ (how we carry out common actions), are unique and can be used to identify us. Having set out some current uses of technology to monitor the body, and their perceived risks, she considered the EU’s approach to the narrower category of biometric data.
Touching on issues of consent, accountability and children’s rights, among others, she suggested that overall the EU GDPR regime is a strong one, albeit with many questions left to answer, including those relating to the ‘transparent’ human or individual: the notion that advanced technology and data collection now mean that we cannot hide certain information about ourselves behind our ‘social mask’, for example the DNA in our spit or the pace of our hearts. We are transparent humans. Some of this ‘transparent individual’ data may well fall outside the definitions of biometric, health and genetic data provided in the GDPR.