An Aeryon Scout drone which can be used for search and rescue activities.

A couple of weeks ago, a roundtable organised by the Government Office for Science, the Royal Society, and the Arts and Humanities Research Council brought experts together, here at the Royal Society, to discuss questions of ethics and trust in the human-machine relationship.

Focusing on drones and autonomous systems, experts in machine learning, social science, public engagement, policy, ethics and the law explored the implications of recent and future technological advances for policy and society.

Machine learning is a field of mathematics and computational science that studies software that can learn and adapt from data without relying on a strict set of rules – a faculty reminiscent of our own ability to learn as humans.
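As a loose illustration of that idea (not an example discussed at the roundtable), the short Python sketch below uses the scikit-learn library: rather than hand-writing decision rules, the program is given a handful of labelled examples and infers its own rule from them. The drone-flavoured sensor readings and labels are invented purely for illustration.

    # A minimal sketch: the model learns a rule from labelled data
    # instead of being programmed with one. Requires scikit-learn.
    from sklearn.tree import DecisionTreeClassifier

    # Invented sensor readings: [altitude in metres, speed in m/s],
    # labelled 1 = "safe to continue", 0 = "return to base".
    X = [[120, 5], [80, 4], [20, 12], [15, 14], [100, 6], [10, 16]]
    y = [1, 1, 0, 0, 1, 0]

    model = DecisionTreeClassifier()
    model.fit(X, y)  # the decision rule is inferred from the examples

    # A new, unseen reading is classified by the learned rule.
    print(model.predict([[90, 5]]))  # -> [1], i.e. "safe to continue"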

Machine learning can be used for a number of applications – for example, it already underpins internet search engines and recommendation systems. It also has the potential to make unmanned vehicles, such as drones, increasingly autonomous.

The opportunities that machine learning presents for the economy and society were the focus of a recent Transforming our Future conference at the Royal Society.

The roundtable highlighted that drones and autonomous technologies are developing rapidly and that it is not possible to predict all their uses. Examples given included driverless cars, drones delivering packages, drones deployed in warfare, and drones searching for poachers in areas where the terrain is inaccessible or too vast to search in other ways.

Participants also discussed how such technologies have the potential to deliver great benefits for business and society, but could equally be put to harmful uses.

The experts present pointed out several possible issues. As a whole new class of physical objects in our environment, drones raise practical safety concerns. They may also give old privacy issues new forms, depending on how, and by whom, data are collected and used. In addition, it is unclear how far we trust autonomous systems – driverless cars, for example – and how much control we are willing to cede to them.

Finally, machine learning requires large volumes of data to train software, as well as substantial computing resources. Some participants thought that, at the moment, only large organisations have access to such resources, which could further concentrate power in the hands of big corporations.

So, how can we address these issues while still realising the opportunities that these technologies present?

To ensure equal access to resources, academia and industry could be required to contribute to an open data bank. Experts also discussed the possible role of regulation in controlling the development and use of drones and autonomous systems.

Regulatory frameworks vary from country to country, and there was also concern that regulation might stifle innovation and prevent the UK from keeping its lead in these areas.

In fact, some participants thought regulation might struggle to keep up with the pace of technological development, and many questioned whether it was even an option: while regulation might help mitigate problems, it might not be effective at preventing them in the first place.

Several participants emphasised the need to further embed ethics as part of the education of scientists, to encourage self-governance. Some mentioned ethics boards being created in companies, as a form of industry self-regulation. In addition, government could use its procurement power to promote the most beneficial applications of autonomous systems.

In any case, the roundtable showed a clear need for policy to address these issues and to make sure the UK harnesses these powerful technologies.