Giles Pavey speaking at the PolicyLab. Privacy and bakewell tarts: what’s in store for you?

Last week a panel of experts from industry, academia, ethics and the law, as well as an “esteemed audience”,* came together for a PolicyLab here at the Royal Society to tackle the question ‘Consumer data: what’s in store for you?’

Our experts were Giles Pavey, Chief Data Scientist at dunnhumby, the customer science company behind leading supermarket loyalty card schemes, Paul Longley, Professor of Geographic Information Science and Director of the ESRC Consumer Data Research Centre at UCL, Baroness Onora O’Neill FRS, political philosopher and ethicist, and Jane Kaye, Professor of Health, Law and Policy and Director of the Centre for Health, Law and Emerging Technologies at Oxford (HeLEX).

In the age of information, data rules. Researchers and businesses thrive off your data – from the jeans you buy to the genes you have – and hopefully you, and society, get a good deal in return. But what if companies can use data to find out more than we want them to? We are only just beginning to understand what is possible with increasingly sophisticated analytics and, as Philip K. Dick (or Spielberg/Cruise, if you prefer) imagined in Minority Report, some uses might not be desirable or ethical. So what should we do?

First, the benefits. Giles Pavey offered the example of people with coeliac disease wanting to buy bakewell tarts. At first glance there seems little benefit for small supermarkets in stocking relatively low-selling, gluten-free products. However, as Giles explained, loyalty card data show that these products draw customers in: for every £1.50 they spend on gluten-free pastry, these customers spend an additional £7 in store. So everyone wins: the supermarket gets good custom and the customer can access the products they want.
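To give a flavour of the kind of basket analysis behind a claim like this, here is a minimal sketch in Python. The transactions, column names and figures are all invented for the example – dunnhumby’s actual methods are, of course, far more sophisticated.

```python
# Illustrative "halo spend" analysis on loyalty-card baskets.
# All data and labels here are made up for the example; real
# loyalty-card analysis works on far larger, richer datasets.
from collections import defaultdict

# Each record: (customer_id, product_category, spend_in_pounds)
transactions = [
    ("c1", "gluten_free", 1.50), ("c1", "groceries", 7.20),
    ("c2", "gluten_free", 1.40), ("c2", "groceries", 6.80),
    ("c3", "groceries", 4.10),   # a customer who buys no gluten-free items
]

gf_spend = defaultdict(float)     # gluten-free spend per customer
other_spend = defaultdict(float)  # everything else per customer
for customer, category, spend in transactions:
    if category == "gluten_free":
        gf_spend[customer] += spend
    else:
        other_spend[customer] += spend

# For customers who buy gluten-free products, how much extra do they
# spend on the rest of their basket per pound of gluten-free spend?
total_gf = sum(gf_spend.values())
total_other = sum(other_spend[c] for c in gf_spend)
print(f"£{total_other / total_gf:.2f} of other spend per £1 of gluten-free spend")
```

The point of the exercise is that the value of a niche product shows up not in its own sales line but in the whole baskets of the customers it attracts.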

Society beyond customers’ health and retailers’ wealth can also benefit from analysing consumer data. Paul Longley explained how data from household energy meters led to energy companies being fined for not insulating properties and failing to meet environmental targets.

Some of these data could be obtained from customer surveys, but let’s face it, many of us don’t have the time or inclination to fill these in – this is how Paul Longley interprets falling response rates (PDF). Fortunately for retailers, but potentially worryingly for individuals, lots of us are also too busy (or simply don’t care) to read convoluted terms and conditions before sharing our personal data, eager as we are to access the latest ‘free-to-use’ apps and other services.

Yet despite this rather blasé attitude, Giles Pavey told us that the biggest concerns for consumers are identity theft and companies selling on their data, with government surveillance coming a distant third. So are these concerns justified? Are you taking an unreasonable risk by trusting other parties with your data?

One way to reduce this risk would be for consumers to be selective about what information they give. But where do you draw the line between data you are happy to make open and data you wish to keep private? Onora O’Neill and Jane Kaye pointed out that privacy is heavily context-dependent: your address is considered private in medical records, but is publicly accessible in the electoral register.

So, is who you share your data with more important than what you share? Jane Kaye and Onora O’Neill suggested different approaches to deciding if you can trust those who handle your data.

For Jane Kaye, a contributor to the recent Nuffield Council on Bioethics report on the issue, one key aspect is that the public must be actively and continuously engaged in the governance of personal data usage. This can take the form of dynamic consent. While Onora O’Neill agreed that people ought to be well informed about how their data will be used, she also highlighted a sizeable problem with participation: “it takes too many evenings”.

Instead, Onora O’Neill’s core recommendation is for institutions to demonstrate their trustworthiness. Institutions should have rules and should communicate about:

1) who can access data

2) how data is used/linked to other datasets

3) data anonymisation.

She gave the example of UK Biobank, where the process for gaining access to data is already so rigorous that an additional ‘re-consenting’ system is not justified and would risk slowing the pace of research. Different institutions will operate differently: as Paul Longley confirmed, clinical researchers rely on consent and anonymisation, while geographers reuse data on the basis of anonymisation rather than consent.
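To make the anonymisation point a little more concrete, here is a minimal sketch of the sort of transformation applied to records before reuse. The record structure and field names are hypothetical, and real anonymisation must also guard against re-identification through linkage to other datasets, which is far harder than this suggests.

```python
# Illustrative record anonymisation: drop direct identifiers and
# coarsen quasi-identifiers. Field names are invented for this example;
# real schemes need formal guarantees (e.g. k-anonymity) and governance.
import hashlib

record = {
    "name": "A. Smith",             # direct identifier: replace
    "postcode": "CB2 1TN",          # quasi-identifier: generalise
    "year_of_birth": 1972,          # quasi-identifier: coarsen
    "purchase": "gluten-free pastry",
}

def anonymise(rec, salt="per-release-secret"):
    return {
        # Replace the name with a salted pseudonym so records from the
        # same person can still be linked within this one dataset.
        "pseudonym": hashlib.sha256((salt + rec["name"]).encode()).hexdigest()[:12],
        # Keep only the postcode district, not the full postcode.
        "postcode_district": rec["postcode"].split()[0],
        # Report only the decade of birth, not the exact year.
        "birth_decade": rec["year_of_birth"] // 10 * 10,
        "purchase": rec["purchase"],
    }

print(anonymise(record))
```

Even this simple example shows why anonymisation is a judgement call rather than a switch: every field kept makes the data more useful for research, and also easier to link back to a person.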

Jane Kaye added that a penalty system should be in place to ensure data controllers handle data responsibly and to penalise any misuse. Along these lines, the latest drafts of the EU General Data Protection Regulation include penalties of up to €100 million!

Generally, this event highlighted the tension between open data and privacy, a dilemma also stressed in a recent report by Chatham House and the Centre for International Governance Innovation. Most of the PolicyLab participants seemed happy about the prospects of open data. But they also want to know about and have a say in what becomes of their data. And data controllers should acknowledge that.

New EU and UK data protection regulations will be developed over the next two years. It is a thorny issue and much thinking still needs to be done. But one thing is certain: these laws will have substantial consequences for the future of both privacy and open science.


(*) Giles Pavey told us that future progress in image recognition algorithms may allow you to obtain the list of attendees from this photo of the audience.

An audio recording of the PolicyLab can be found on the event’s webpage.