Lachlan's Research

Round-up of Recent Research Papers

There have been a few research papers released since my last blog post, so I thought it was about time to do a round-up of the links. Get in touch to let me know your thoughts on any of them!

[1] Title: Avoiding the Internet of Insecure Industrial Things

Authors: Lachlan Urquhart and Derek McAuley

Journal: Computer Law and Security Review [Elsevier] [PDF – Open Access]

Date: Jan 2018

Abstract: Security incidents such as targeted distributed denial of service (DDoS) attacks on power grids and hacking of factory industrial control systems (ICS) are on the increase. This paper unpacks where emerging security risks lie for the industrial internet of things, drawing on both technical and regulatory perspectives. Legal changes are being ushered in by the European Union (EU) Network and Information Security (NIS) Directive 2016 and the General Data Protection Regulation 2016 (GDPR) (both to be enforced from May 2018). We use the case study of the emergent smart energy supply chain to frame, scope out and consolidate the breadth of security concerns at play, and the regulatory responses. We argue the industrial IoT brings four security concerns to the fore, namely: appreciating the shift from offline to online infrastructure; managing temporal dimensions of security; addressing the implementation gap for best practice; and engaging with infrastructural complexity. Our goal is to surface risks and foster dialogue to avoid the emergence of an Internet of Insecure Industrial Things.

[2] Title: Building accountability into the Internet of Things: the IoT Databox model

Authors: Andy Crabtree, Tom Lodge, James Colley, Chris Greenhalgh, Kevin Glover, Hamed Haddadi, Yousef Amar, Richard Mortier, Qi Li, John Moore, Liang Wang, Poonam Yadav, Jianxin Zhao, Anthony Brown, Lachlan Urquhart, Derek McAuley

Journal: J. of Reliable Intelligent Environments [Springer] [PDF – Open Access]

Date: Jan 2018


Abstract: This paper outlines the IoT Databox model as a means of making the Internet of Things (IoT) accountable to individuals. Accountability is a key to building consumer trust and is mandated by the European Union’s general data protection regulation (GDPR). We focus here on the ‘external’ data subject accountability requirement specified by GDPR and how meeting this requirement turns on surfacing the invisible actions and interactions of connected devices and the social arrangements in which they are embedded. The IoT Databox model is proposed as an in principle means of enabling accountability and providing individuals with the mechanisms needed to build trust into the IoT.

[3] Title: Demonstrably Doing Accountability in the Internet of Things

Authors: Lachlan Urquhart, Tom Lodge, Andy Crabtree

Journal: Under Review [currently on arXiv] [PDF]

Date: Submitted Dec 2017

Abstract: This paper explores the importance of accountability to data protection, and how it can be built into the Internet of Things (IoT). The need to build accountability into the IoT is motivated by the opaque nature of distributed data flows, inadequate consent mechanisms, and lack of interfaces enabling end-user control over the behaviours of internet-enabled devices. The lack of accountability precludes meaningful engagement by end-users with their personal data and poses a key challenge to creating user trust in the IoT and the reciprocal development of the digital economy. The EU General Data Protection Regulation 2016 (GDPR) seeks to remedy this particular problem by mandating that a rapidly developing technological ecosystem be made accountable. In doing so it foregrounds new responsibilities for data controllers, including data protection by design and default, and new data subject rights such as the right to data portability. While GDPR is technologically neutral, it is nevertheless anticipated that realising the vision will turn upon effective technological development. Accordingly, this paper examines the notion of accountability, how it has been translated into systems design recommendations for the IoT, and how the IoT Databox puts key data protection principles into practice.

[4] Title: Accessing Online Data for Youth Mental Health Research: Meeting the Ethical Challenges

Authors: Elvira Perez Vallejos, Ansgar Koene, Christopher James Carter, Daniel Hunt, Christopher Woodard, Lachlan Urquhart, Aislinn Bergin, Ramona Statache

Journal: Philosophy and Technology [Springer] [PDF – Open Access]

Date: Oct 2017

Abstract: This article addresses the general ethical issues of accessing online personal data for research purposes. The authors discuss the practical aspects of online research with a specific case study that illustrates the ethical challenges encountered when accessing data from Kooth, an online youth web-counselling service. This paper firstly highlights the relevance of a process-based approach to ethics (Markham and Buchanan 2012) when accessing highly sensitive data and then discusses the ethical considerations and potential challenges regarding the accessing of public data from Digital Mental Health (DMH) services. It presents solutions that aim to protect young DMH service users as well as the DMH providers and researchers mining such data. Special consideration is given to service users’ expectations of what their data might be used for, as well as their perceptions of whether the data they post is public, private or open. We provide recommendations for planning and designing online research that includes vulnerable young people as research participants in an ethical manner. We emphasise the distinction between public, private and open data, which is crucial to comprehend the ethical challenges in accessing DMH data. Among our key recommendations, we foreground the need to consider a collaborative approach with the DMH providers while respecting service users’ control over personal data, and we propose the implementation of digital solutions embedded within the platform for explicit opt-out/opt-in recruitment strategies and ‘read more’ options (Bergin and Harding 2016).


Gikii 2017: Hunting for Ethical Innovation in the Adventures of Rick and Morty

Last week I had the pleasure of speaking at the annual Gikii workshop, this year hosted in England's historical capital, Winchester. Since 2006, Gikii has brought IT lawyers and technologists together to spin novel arguments that fuse legal, technological and popular culture perspectives. Papers have plenty of LOLCats, sci-fi refs and a serious point at their core. This year's Gikii was no exception, with many wonderfully nerdy indulgences of the community on display. Gikii was the first conference I ever attended, in Goteborg back in 2011. At the time, I thought: are all conferences like this? How naive 😉

After a two-year break (last year Gikii coincided with my Ph.D. submission day, so I couldn't go), it was good to be back. As ever, all the presentations were great and highly entertaining, but a few stuck out in my mind:

  • Michael Veale’s Harry Potter Inspired “Getting pensive about the Pensieve: Governing memory data from Brain-computer interfaces.”
  • Tristan Henderson’s “Everybody Gets an AI”
  • Daithi MacSitigh’s “Peng ting called disruption: representations of the sharing economy”
  • Edina Harbinja’s “Post-mortem Privacy: From creepiness and silliness to The Daily Mail.”
  • Reuben Binns, “Freaky Friday, body-swapping and borrowed identifiers: where is the data subject?”.
  • Damian Clifford and Jef Ausloos’s “Technobabble and Technobulls*t – what the hell is everyone on about?”
  • Roxana Bratu, “What makes a hero? Re-enacting social drama in corruption and cybercrime.”

My own presentation was called “Hunting for ethical innovation in the adventures of Rick and Morty.” The basic premise was to examine some ethically dubious inventions created by the show’s protagonist, Rick, and critique them from a responsible research and innovation perspective. Below you’ll find my thoughts on the topic – hope you enjoy 🙂

Hunting for Ethical Innovation in the Adventures of Rick and Morty, Gikii 2017

“Sometimes science is more art than science, Morty. A lot of people don’t get that.”

                                                                      Rick Sanchez, Rick and Morty, Episode 6, Season 1

Who are Rick and Morty?

Adult Swim's cult sci-fi cartoon Rick and Morty[1] is not only entertaining to watch but gives us many occasions to question the nature of scientific innovation. Rick Sanchez, a nihilistic, archetypal mad scientist, co-opts his naive, shy, reluctant grandson, Morty Smith, into a multitude of adventures across space and time. Despite Rick being the 'smartest man in the multiverse', many of his inventions give pause for ethical reflection. Rick is why the RRI agenda exists… his work wouldn't get through (m)any ethics boards.

Sampling Rick’s Inventions

In "Anatomy Park" (see the clip here) we see the fruits of Rick's biological engineering efforts when Morty is sent in to fix problems with Rick's newly finished microscopic theme park, built inside a homeless man. It features dangerous virus exhibits and rollercoasters that flirt with trademark infringement, like "Spleen Mountain" and "Pirates of the Pancreas". Some ethical concerns in this episode include:

  • Rick's coercion of an unwilling lab assistant – Morty is shrunk down and injected into Anatomy Park without much say in the matter. Rick shows little respect for the autonomy and dignity of his "colleague";
  • Risky bio-engineering – the park features highly contagious germ exhibits, including bubonic plague, which is valuable and, due to a lack of safeguards, is being smuggled out by an insider threat working in the park;
  • Vulnerable test subject – Rick's choice of a homeless drunk man (Ruben) as research subject is questionable… and that is before we get to questions of Ruben's capacity to consent to medical procedures.

Similarly, in “Lawnmower Dog” (see the clip here) Rick creates a headset that increases the family dog’s intelligence to stop him urinating on the carpet. However, Snuffles soon becomes self-aware, forming an army of cyborg dogs that eventually take over the world. Ethical issues here include:

  • Robotic augmentation being tested on live animals – Rick's lack of safety safeguards is considerable. A key trigger in Snuffles' desire for world domination is learning of his castration and the subsequent trauma this causes.
  • Skewed innovation cost/benefit analysis – Rick's motivation is to improve the dog's intelligence and ultimately keep the carpet clean… yet he ends up creating a tool that subjugates the human species to their new dog overlords.

Responsible Research and Innovation

As we can see, Rick's actions are often rather ethically questionable, but Morty is there to correct for his moral bankruptcy. Back in this reality, there has been a movement to introduce greater responsibility into scientific innovation. Multiple frameworks have been drafted that espouse the need for greater stewardship of the future, by thinking about the social, ethical and legal implications of today's inventions. These include: value sensitive design[2]; RRI[3]; Real-Time Technology Assessment[4]; anticipatory governance[5]; privacy[6]/surveillance[7]/ethical[8]/social[9] impact assessments; and computer[10] and engineering[11] ethics. Through structured reflection, scientists and researchers need to establish and forecast risks, and put in place safeguards and mitigating measures to make science and innovation, as an institution, more societally conscious. Rick clearly missed this memo…

Throughout the series, his inventions challenge the foundations of ethical science and design, and in this Gikii paper we examine four in more detail, namely: a vole DNA-based love potion; a 'Meeseeks' personal assistant box; an AI defence system; and the 'Microverse' spaceship battery.

Invention 1: Microverse Spaceship Battery (clip here)

In the episode "The Ricks Must be Crazy", we discover Rick has created an entire universe (the Microverse) that acts as a battery to power his spaceship. Inhabitants of this universe are required to create power by stomping on power generation boxes. When the ship breaks down, Rick and Morty enter the battery (posing as 'aliens') to discover a local scientist has created their own 'Miniverse' within the 'Microverse', to generate power using the same 'universe in a box' process. Delving down one level further, the 'Teenyverse' within the 'Miniverse' is trying to do the same again. Each universe is unaware that it exists purely to power Rick's spaceship and charge his mobile phone. Upon discovery, revolt and destruction ensue.

Ethical Issues (mainly around the values in the design)

  • Slave Labour – Rick (and the other scientists) creating universes to enslave entire populations for energy generation is disturbing and shows little regard for the human rights of their citizens.
  • Trust and deception – Rick tries to trick the local populations into believing he and Morty are aliens to create a power asymmetry founded on deception. The citizens worship Rick, erecting statues in his name in their public squares.
  • Psychological well-being – When the inhabitants realise that their entire universe is created to charge Rick’s car and phone, this causes understandable psychological harm and distress.

Invention 2: Keep Summer Safe (clip here)

Rick's granddaughter (and Morty's sister), Summer, is left behind in the ship with an onboard AI instructed by Rick to "Keep Summer Safe". The ship AI takes increasingly disturbing approaches to satisfying this simple command. Laser-based violence targeted at curious passers-by and psychological tactics used against local police are two examples. As Summer becomes more shocked, she puts greater limitations on what the AI can do, leading to ever more elaborate responses to keep her safe. This culminates in the ship brokering a peace treaty between warring human and spider populations on the planet.

Ethical Issues:

  • Lack of Transparency in AI – Summer only realises how literally the AI system interprets its instruction once it starts trying to keep her safe. As she realises the unethical lengths it goes to, she responds with safeguards, e.g. don't kill anyone, don't use psychological torture. Rick hasn't hardcoded any of these safeguards (as a responsible IT professional might), and the harms only emerged through use in the 'real world'. The lack of foresight is exacerbated by the lack of transparency.
  • Use of torture/Psychological trauma – The creepy methods taken by the AI are disproportionate to the risk posed to Summer, causing significant trauma to anyone who crosses its path.

Invention 3: Meeseek Personal Assistant Box (clip here)

In the episode "Meeseeks and Destroy", tired of helping with mundane tasks around the home, Rick creates a box that generates bright blue humanoid assistants designed to help with one task alone. Once the task is completed they cease to exist, turning into a puff of smoke. The Meeseeks don't like to live too long, but when asked by Rick's son-in-law, Jerry, to help with the ostensibly simple task of improving his golf swing, they struggle. The episode soon escalates as hundreds of Meeseeks are generated to work on this task, quickly descending into madness and desperation as they cannot complete their goal.

Ethical Issues:

  • Environmental Sustainability – The Meeseeks only have a temporary existence, which raises questions about how sustainable they are. What are they made of, and is it environmentally harmful? Their unnatural blue colour and the puffs of smoke they make when they cease to exist suggest some sort of chemical mix… so we now have questions about pollution and long-term impacts on air quality to consider.
  • Public Safety – When the Meeseeks can't complete their task, they get frustrated and eventually take hostages in a hotel in order to persuade Jerry to do what they want him to do. This raises clear concerns for ensuring Rick's inventions don't harm members of the public.
  • Universal Usability – In the episode, the Meeseeks are able to satisfy requests from other people quite easily (e.g. from Rick's daughter Beth and granddaughter Summer). The Meeseeks box is thus not universally usable (as would be required of a good IT system), and some users will struggle to get effective help from the Meeseeks.

Invention 4: Vole Potion 

In the episode "Rick Potion #9", Rick creates a love potion made from vole DNA (voles being furry rodents that mate for life) for Morty to take to his prom, in the hopes of winning over his high school crush. It works, but the effects of the potion are spread through contact with those who have the flu. Soon affection for Morty has spread to the entire town, and then the whole world. Such global love for Morty leads to jealousy and danger to his safety. Rick cooks up an antidote potion based on praying mantis DNA (among other things) but, predictably, this only makes matters worse. Spoiler alert: the episode ends with Rick and Morty hunting for an alternate timeline to live in, as their own has been destroyed, left inhabited by irreversibly mutated Cronenberg-esque monsters…

Ethical Issues:

  • Lack of foresight – Rick sees the impacts of his hastily created potion and antidote only once they are released. However, with a little more foresight (or lab testing), he could have avoided releasing his experiments upon the world and turning the entire population into mutants.
  • Lack of stewardship – In this episode, Rick and Morty still have an escape route, which helps explain Rick's (low) level of stewardship. Realising he cannot fix the problem, Rick's solution is to abandon this universe and use his portal gun to transport to an alternate universe they can inhabit safely.
  • Respect for biodiversity – Innovations are meant to respect other species and not cause undue harm (e.g. concerns over genetically modified foods in the food chain and precaution about nanotech in the wild). In contrast, Rick destroyed all species and ecosystems across the planet, homogenizing all species into one mutated form.

As we can see, such storylines are rich in moral dilemmas, providing ample opportunity for reflection on the nature of ethical innovation.


[1] Rick and Morty IMDB

[2] Value Sensitive Design

[3] Responsible Research and Innovation

[4] Real-Time Technology Assessment

[5] Anticipatory Governance

[6] Privacy Impact Assessments – RFID PIA; ICO PIA code of practice; DP Impact Assessment – Article 35 General Data Protection Regulation 2016

[7] Surveillance Impact Assessments

[8] Ethical Impact Assessment

[9] Social Impact Assessment

[10] Computer Ethics

[11] Engineering Ethics


ITU Copenhagen Visit, ThingsCon Salon & Techfest 2017

ITU Building

This week I had the pleasure of being invited to visit the IT University of Copenhagen to spend some time with the ETHOS lab and VIRT-EU project. I met Rachel Douglas Jones last year at a SATORI event in the UK, and we discussed the possibility of me visiting at some point, as there are a lot of crossovers between our research interests. So Rachel, Irina Shklovski and Ester Fritsch kindly welcomed me at ITU for the day. 


I presented my research in a talk called “MoralIT: Regulating the Domestic IoT” at an STS Salon. Different members of faculty came along, especially those from the Technologies in Practice research group. The team have a multidisciplinary approach, with expertise in anthropology, CSCW, HCI, data science and beyond. There was a nice discussion after the talk, with some critical questions about the relationship between law and HCI, which gave food for thought (before literal food at the rather nice Uni canteen).

Best Danish pastry ever

After this, I visited Marie and Cæcilie at the Ethos lab to talk about their work with conversational agents, Google Home and Amazon Echo/Alexa. We considered some of the unique regulatory issues that might arise. I also attended a VIRT-EU project meeting, which gave me more insight into this timely project and its goals of creating a more ethical IoT future. There are clear parallels with the work we’re doing at Horizon so we’ll keep in touch and look for ways to collaborate in the future.  

In the early evening, we headed over to Techfest 2017, which is being hosted in Copenhagen this week. It is a massive festival with over 15,000 attendees featuring a plethora of summits, meet-ups, film screenings, art installations, ‘fireside chats’ and keynotes. 

Event Sign-Up

I was a speaker in a VIRT-EU organised ThingsCon Salon there, alongside Irina Shklovski, PI of the project, and Kasja Westman, a UX designer at Topp. It was very well attended, with around 60 passionate and enthused members of the public coming along. As an aside, the venue was definitely the most 'hipster' place I've ever presented in! A grungy, graffiti-filled bar in a former warehouse in the Meatpacking District of the city, surrounded by gourmet restaurants! Not the usual seminar room in a university…

ThingsCon Salon Venue

After our brief presentations, the majority of the event was a public discussion on the challenges of building more ethical IoT systems. The audience was really engaged and we covered a lot of ground. No topic was off the table, from guarding against physical safety and security risks from hacked ovens to how we can resist IoT privacy harms in democratic societies. It was a really good experience and it was nice to see how interested everyone was in this area. By the end of the Salon, there was a real sense of energy and motivation in the room. Afterwards, we went out for a lovely meal at a Spanish/Danish fusion restaurant (the food scene here is very vibrant!) to discuss the day. Thanks again to my hosts for a great visit!

Realising the right to data portability for the domestic Internet of things

A quick blog post to say my new journal paper written with Neelima Sailaja and Derek McAuley called “Realising the right to data portability for the domestic Internet of things” is now out (available in Open Access). It is featured in a special edition of Personal and Ubiquitous Computing on Privacy and the Internet of Things. The editorial from Alan Chamberlain, Andy Crabtree, Hamed Haddadi and Richard Mortier gives a useful overview of the edition, featuring the papers:


  • Peter Tolmie and Andy Crabtree – The Practical Politics of Sharing Personal Data (available here)
  • Thomas Pasquier, Jatinder Singh, Julia Powles, David Eyers, Margo Seltzer and Jean Bacon – Data Provenance to Audit Compliance with Privacy Policy in the Internet of Things (available here)
  • Ilaria Torre, Odnan Ref Sanchez, Frosina Koceva and Giovanni Adorni – Supporting Users to Take Informed Decisions on Privacy Settings of Personal Devices (available here)
  • Joseph Korpela and Takuya Maekawa – Privacy Preserving Recognition of Object-based Activities Using Near-Infrared Reflective Markers (available here)

The introduction to our paper begins… “Bringing the new right to data portability (RTDP) from an abstract legal provision in Article 20 of the EU General Data Protection Regulation (GDPR) 2016 into practice requires a greater role for the IT design community. Simply put, the RTDP seeks to empower users by giving them greater control over their personal data, enabling them to both acquire their data and then move it around, for example to a different data controller. In this paper, we focus on how IT designers can use Privacy by Design (PbD) approaches to respond to these RTDP obligations. We are particularly interested in how the RTDP plays out for the technological context of the domestic Internet of things (IoT). By examining the legal, commercial and technical landscape around the RTDP, we can begin to unpack the practical roadblocks and opportunities ahead in implementing the right in practice”… to read on and find out more, click here.

ETHICOMP/CEPE 2017: Smart Cities

Time flies, and it's now more than 18 months since the last ETHICOMP at De Montfort's Centre for Computing and Social Responsibility in 2015. This time the conference moved further afield to the University of Torino, Italy, to be co-hosted with CEPE (Computer Ethics Philosophical Enquiry). From 5 to 8 July I attended four days of panels, keynotes and plenaries at (what must be!) the biggest computer ethics conference in the world.

Train from Airport to Torino

It was great to catch up with familiar faces, and I was pleased to be part of the ICT and the City session organised by Prof Michael Nagenborg from 4TU/Twente University.

The full paper has just been published in a new journal, Orbit, that has emerged from a big EPSRC RRI project. The paper has also been nominated to be published in the ACM SIGCAS publication, Computers and Society, so hopefully, I’ll be able to link to that in due course too.



Before the workshop, Michael conducted interviews with different panellists about our work and published these on his website: Urban Technologies. My discussion there with him is a bit more detailed, but to summarise, this paper mainly focused on unpacking the ethical dimensions of the role of designers in regulation. Using mediation theory, it discusses how designers can address concerns of citizens posed by smart cities. In sustainably scaling up IoT technologies to the city level, HCI designers can both engage with the needs of citizens and respond to these through the design of urban IoT systems (e.g. using participatory/co-design, value sensitive design, etc.). I explored how concepts from HCI, like 'seamful design', could be useful for surfacing the regulatory uncertainties inherent in future urban IoT management.



As is common at many conferences I’ve attended this year (especially BILETA 2017 and TILTing 2017), AI and algorithms continue to be hot topics.  Below I’ve provided a list of some personal highlights from across the conference.

Day 1:

  • The Law track – I enjoyed Burri et al's paper on using legal personhood (e.g. LLPs) as a mechanism to attach legal responsibilities to autonomous systems. They compared legal possibilities for using this route in different jurisdictions, namely the UK, Germany, Switzerland and Delaware, US.
  • The Fiction track – Johnson et al discussed using Design Fictions to engage with ethical dimensions of new technologies; Vallejos et al presented findings from the CRUCIBLE funded project 'AI Goes to War'; Adams and Ben-Youssef presented their work on the interplay between superhero narratives and security/policy debates (e.g. through Daredevil and Superman vs Batman).

Day 2:

  • Ethics in Software Development Track – Wolf et al examined the case of Microsoft Tay; Breems proposed ways to support longitudinal reasoning by software engineers about responsibility for artificial agents, linking initial action with future impacts.
  • Video Games Track – Flick discussed the construction of a code of ethics for in-game archaeological practices in No Man’s Sky; Neely explored the ethical interplay/disconnect between real world identities of players and in-game avatars (for example in World of Warcraft); Klein and Lin deconstructed the arguments underpinning the widely discussed Ban on Sex Robots campaign.

Day 3:

  • Both ICT and the City Track Sessions, with talks from:
    • Nagenborg (chair) set the scene, highlighting the need for smart cities to emerge as sites of citizen participation and engagement, attending to risks of urban surveillance;
    • Gonzalez Woge proposed learning from post-phenomenology and how agency is increasingly embedded in our environment (through ambient intelligence) using the example of open living labs;
    • Dainow suggested using autopoietic theories to reframe definitions and ethical implications of smart cities;
    • Heimo discussed ethical dilemmas of constructing mixed reality experiences for cultural heritage where insufficient historical information requires designers to take creative liberties to create an immersive experience, but at the expense of historical accuracy;
    • Lastly, Fichtner explored how the logic of optimising flows of knowledge underpinning smart cities can reduce spaces for creativity and citizens may find themselves experiencing the city through spatial filter bubbles.
  • I really enjoyed the keynote from Herman Tavani who surveyed shifts in computer ethics and highlighted the need to return to formal logic and critical reasoning in deconstructing arguments within computer ethics.

Day 4

  • Social Media Track: Tuikka et al provided detailed insights on the ethics of netnography as a research tool; Koene et al unpacked the nature of editorial responsibilities social media platforms may owe to users due to personalisation algorithms (e.g. the Facebook news feed curating content for individuals); lastly, Koga and Yanagihara explored ethical aspects of social media marketing, focusing on prominent case studies (e.g. the Target pregnancy case; BabyFoot online competitions).
  •  The keynote from James Moor, who received the Weizenbaum Award, examined the future of computer ethics with challenges stemming from AI.

TILTing Perspectives 2017

Last week I was back over in the Netherlands for TILTing Perspectives Conference 2017. Hosted by Tilburg University at their Institute for Law, Technology, and Society, this was a 3-day event with around 200 presenters, 8 parallel sessions, 6 keynotes etc. I was over there presenting a WiP paper with Derek McAuley on Cybersecurity Implications of the Industrial Internet of Things.

Security incidents like targeted distributed denial of service (DDoS) attacks on power grids and industrial control system (ICS) hacks in factories are set to increase as infrastructure becomes increasingly connected. The short paper looks at where emerging security threats might lie as the industrial IoT trend gathers pace, both from engineering and regulatory perspectives. Vulnerabilities and threats around the smart energy infrastructure are used to consider where risks might arise at different points in the energy supply chain, from exploration through to consumption.

'The Digital Oilfield' sees the integration of IoT into oil platforms, for example, to monitor the integrity and performance of operational components. This opens new threat vectors for advanced persistent threats (APTs) and cyber espionage. The variety of organisations operating on a platform, sharing infrastructure but seeking confidentiality in their operations, adds to the complexity of securing this domain. Making IoT components on rigs secure but usable for workers is an important element, to minimise risks to the safety or security of infrastructure through avoidable human error. Similarly, in a future of autonomous logistics, with oil tankers navigating the seas, new opportunities can emerge for GPS jamming or spoofing to enable remote piracy or ransomware attacks, where the consequences include environmental harm in addition to monetary loss. Perhaps most familiar are the challenges for IoT in the smart energy grid. Risks arise at many points in this supply chain, such as in:

  • Energy generation with the hacking of industrial control systems in power plants.
  • Energy transmission/distribution across the power grid, with DDoS attacks causing blackouts and knock-on effects for services relying on power (hospitals, transportation etc).
  • Energy consumption, where insecure domestic IoT devices can become part of cybercrime infrastructure, particularly botnets, and be used to leverage attacks against critical infrastructure (e.g. Mirai; Persirai; Hajime). The use of software agents to help manage dynamic energy tariffs on behalf of users (to enable peak levelling on the grid) is another emerging domain where threats to end users should be considered.

There are also regulatory changes afoot, with the EU Network and Information Security (NIS) Directive 2016 coming into effect at the same time as the GDPR in May 2018. NIS brings in rules around securing critical infrastructure, including cloud platforms, and establishes notification and cooperation requirements for responding to cyber attacks (e.g. the role of member state computer emergency response teams). GDPR establishes obligations around personal data breach notifications, most relevant for domestic IoT/household energy management devices caught up in attacks. Balancing the growth of industrial IoT against the security threats and regulatory requirements is going to be a tall order. Overall, industrial IoT brings four security elements to the fore that need to be managed:

  • Anticipating the risks of bringing online infrastructure that is ordinarily offline.
  • Managing infrastructural complexity, where critical systems interact and share dependencies in ways that make it difficult to anticipate both threats and the knock-on effects of attacks.
  • Temporal dimensions of security, particularly how IoT risks are managed over the life cycle of systems, which are subject to organisational change, loss of IT support for platforms, etc.
  • The implementation gap around best practice, where standards for security in industrial IoT are still emerging and, once settled, will still take some time to be actioned.

There is a working paper up on SSRN with more details, so any feedback on this is welcome!

From Leeds to Braga to Leiden

The past month has involved a bit more travelling to various conferences and workshops. It started in Leeds at a workshop on electronic monitoring on 6th April, part of the Tracking People Seminar Series, with this one focusing on ethical and legal debates. Personal highlights for me were talks by Mike Nellis, discussing both the technical and criminological dimensions of electronic tagging, and Michael Nagenborg, giving a philosophical discussion of the ethical aspects of tracking. It was nice to catch up with Michael again, as I’ll be participating in the ICT and the City stream he co-chairs at ETHICOMP 2017 in Turin next month, presenting a paper called “Ethical Dimensions of User Centric Regulation” (more on this in due course).

After the Easter break, I went to the main UK IT law event, BILETA 2017, which had gone on a sunny excursion this year to Universidade do Minho, Braga, Portugal. I presented a paper written with CDT student Neelima Sailaja and Horizon Director Derek McAuley on the legal, commercial and technical challenges of realising the new EU Right to Data Portability in practice. This involved discussing the importance of personal information management systems, like the Databox project, in realising the right.

Hosted at the Escola de Direito, it was two packed days of parallel sessions on fake news, algorithmic governance, post-mortem privacy, IP, living in smart cities, biometric criminal identification, and cybercrime. There were very enjoyable keynotes from Burkhard Schafer (hello again, viva examiner!) on law and algorithms, former Spanish Data Protection Commissioner Jose-Luis Pinar on the GDPR, and Joe Cannataci on his work as UN Special Rapporteur on Privacy. As an aside, the conference dinner had all the Portuguese ingredients of Bacalhau, Vinho Verde and Fado! Next year it will be in slightly colder Aberdeen, which will seek to combine the unusual mix of privacy, haggis and a ceilidh…

After BILETA I flew over to the Netherlands for a week-long interdisciplinary workshop called Privacy by Design Beyond the Screen, hosted at the Lorentz Centre in Leiden. This was an intensive event bringing together invited specialists on Privacy by Design (PbD) from many backgrounds to discuss all aspects of the concept, from theoretical framings to the practicalities of doing PbD in practice. Kindly organised by Bert-Jaap Koops, Tjerk Timan and Jaap-Henk Hoepman in the Lorentz’s hospitable space (they invite proposals to host and fund workshops, if you’re interested), the workshop involved a mix of presentations, group discussions and break-out sessions. I presented my PhD research on the concept of user-centric regulation, and over the course of five days we discussed the pros and cons of many different ways of conceptualising PbD, from more legalistic discussions around which notion of privacy is appropriate, to design frameworks like value sensitive design and requirements engineering.
We also had some interesting discussions about the differences between PbD as a process and as a product, with insights from product design too. As part of the group work, we looked in depth at the privacy implications of augmented reality glasses used by police officers attending domestic violence cases. We have a lot of material to sift through, but hopefully a few short papers should emerge in due course!

New Directions in IT law: learning from HCI

Yesterday, my new journal article with Tom Rodden “New Directions in Information Technology Law: Learning from Human-Computer Interaction” was published in the International Review of Law, Computers and Technology. It is part of a special edition on law and algorithms edited by Joseph Savirimuthu from Liverpool University. Other articles in the edition consider accountability in algorithms, algorithmic surveillance, deep learning, and health wearables. The abstract is provided below and there are allegedly 50 free copies available at this link, so help yourselves and snap one up before they all go (around 20 left at last count) 🙂

Abstract: Effectively regulating the domestic Internet of Things (IoT) requires a turn to technology design. However, the role of designers as regulators still needs to be situated. By drawing on a specific domain of technology design, human–computer interaction (HCI), we unpack what an HCI-led approach can offer IT law. By reframing the three prominent design concepts of provenance, affordances and trajectories, we offer new perspectives on the regulatory challenges of the domestic IoT. Our HCI concepts orientate us towards the social context of technology. We argue that novel regulatory strategies can emerge through a better understanding of the relationships and interactions between designers, end users and technology. Accordingly, closer future alignment of IT law and HCI approaches is necessary for effective regulation of emerging technologies.

In other news, there are a couple of working papers up on SSRN looking for feedback, if anyone feels so inclined 🙂 One is written with Neelima Sailaja and Derek McAuley on the new GDPR right to data portability, considering the legal, technical and business dimensions of realising the right in practice. There is also a paper on Artcodes and intellectual property law up there too, unpacking the copyright, trademark and design right dimensions (with a focus on Creative Commons licensing too). Last, but by no means least(!), I passed my PhD viva with no corrections (just a few wee typos), examined by Burkhard Schafer (Edinburgh) and Derek McAuley, at the start of March! So I’m feeling very pleased 🙂

New Paper: Ethical Dimensions of User Centric Regulation

A new working paper has been added to the Social Science Research Network called Ethical Dimensions of User Centric Regulation. This paper is set to be presented at CEPE/ETHICOMP 2017 in Turin, Italy later in the year, in the stream ‘ICT and the City’.

We question the ethical role of information technology (IT) designers in IT regulation, unpacking the nature of their responsibilities. We illustrate our argument through the emerging technological setting of smart cities and use our concept of user-centric regulation (UCR) to consider what a closer alignment of IT design and regulation could mean in practice.

We situate how IT designers can respond to their ethical and legal duties to end users. Our concept asserts that human-computer interaction (HCI) designers are now regulators, but as they are not traditionally involved in the practice of regulation, the nature of their role is ill-defined. We believe designers need support in understanding what their new role entails, particularly in managing ethical dimensions that go beyond law and compliance.

We use conceptual analysis to consolidate perspectives from across Human-Computer Interaction, Information Technology Law and Regulation, Computer Ethics, Philosophy of Technology, and beyond. We focus particular attention on the implications of designers mediating users’ interactions with technologies, and consider the distinction between intended and actual use, where regulation needs to accommodate both.

