In November, the Royal Society brought together experts in industry, government and academia to discuss “Robotics and autonomous systems – Vision, challenges and actions”. In this blog, Professor Carole Mundell, an astrophysicist at the University of Bath whose team has developed intelligent systems to run on rapid, autonomous robotic telescopes, writes about the key messages from the conference.


“Will a robot take your job?”, “Do we really need to fear AI?”, “Intelligent machines: will we accept the robot revolution?” – all potentially alarming headlines from a series of articles published by the BBC in October examining the status of robots in 21st-century society.

But at the Royal Society’s conference “Transforming our future: Robotics and autonomous systems – vision, challenges and actions” on Friday 13 November, a very different vision of robotics and autonomous systems (RAS) emerged.

Far from taking people’s jobs, the first speaker of the day – Dr Bernard Charlès, President and CEO, Dassault Systèmes – argued that robotic, autonomous and highly connected systems should be seen as a means, not an end; a necessary way to address the complex future needs of society such as health, ageing populations, energy, housing and transportation.

Indeed, as a major new emerging and disruptive technology, RAS have been predicted by McKinsey to become a global trillion-dollar industry over the coming decade.

Dr Charlès argued that future innovation will come from a collaborative, multi-scale systems approach to the ‘experience economy’, i.e. one where value lies in the experience a product provides rather than in the product itself.

He described how a systems biology approach – or biomimicry – will provide a new revolution, akin to those brought by electrical and mechanical engineering and IT in the 20th century.

In particular, he cited the need for 40,000 new cities in the next 30 years as a driver for integrated autonomy to run so-called smart cities efficiently. From transportation and entertainment to power and waste management, autonomy and human-computer interaction are key at every scale.

Land, sea and air – autonomous vehicles can do it all

Remarkable progress has already been made in many fields. Professor Paul Newman (University of Oxford) took the audience on a whirlwind, virtual journey in an autonomous, self-calibrating car that exploits sensor technology and probabilistic machine learning to navigate in real time.
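
To unpack what ‘probabilistic’ means here, below is a minimal sketch of the kind of Bayesian state estimation such vehicles rely on: a one-dimensional Kalman filter fusing noisy position fixes. This is textbook material rather than the Oxford system, and every number in it is invented for illustration.

```python
# Minimal sketch of probabilistic state estimation of the kind an autonomous
# car uses to turn noisy sensor readings into a position estimate.
# One-dimensional Kalman filter; all noise parameters are illustrative.

def kalman_step(mean, var, motion, motion_var, z, z_var):
    """One predict-update cycle: move, then correct with a sensor reading."""
    # Predict: commanded motion shifts the estimate and adds uncertainty.
    mean, var = mean + motion, var + motion_var
    # Update: blend prediction and measurement, weighted by their confidence.
    k = var / (var + z_var)                      # Kalman gain
    mean = mean + k * (z - mean)
    var = (1.0 - k) * var
    return mean, var

mean, var = 0.0, 1000.0                          # start: position unknown
for z in [1.1, 2.0, 2.9, 4.2]:                   # made-up noisy position fixes
    mean, var = kalman_step(mean, var, 1.0, 0.1, z, 0.5)
    print(f"position ~ {mean:.2f} (variance {var:.2f})")
```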

Taking this technology literally into the field, autonomous vehicles being developed by Professor Peter Corke (Queensland University of Technology) could overcome the challenges of farming by day and by night, revolutionising global food production methods.

Professor Nick Roy (MIT) described the potential of machine learning and decision making under uncertainty for autonomous robots, such as unmanned aerial vehicles. He emphasised that the biggest challenge lies in the semantic gap between how robots and humans think – the human-computer interaction is fundamentally a communication problem.
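
His point about decision making under uncertainty can be made concrete with a toy example: given a belief (a probability distribution) over possible world states, the robot picks the action with the highest expected utility. The states, actions and payoffs below are entirely hypothetical and not from Professor Roy’s work.

```python
# Toy decision making under uncertainty: choose the action with the highest
# expected utility under the current belief. All numbers are invented.

belief = {"corridor_clear": 0.7, "corridor_blocked": 0.3}

# utility[action][state]: payoff of taking that action in that state
utility = {
    "fly_forward":     {"corridor_clear": 10.0, "corridor_blocked": -100.0},
    "hover_and_sense": {"corridor_clear": -1.0, "corridor_blocked": -1.0},
}

def expected_utility(action):
    return sum(p * utility[action][state] for state, p in belief.items())

best = max(utility, key=expected_utility)
print(best, expected_utility(best))   # hovering wins: a crash is too costly
```

Even in this tiny example the cautious action wins, because the rare bad outcome is catastrophic – exactly the trade-off an unmanned aerial vehicle faces before committing to a manoeuvre.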

He also highlighted the lack of legislative and policy frameworks around the liability of drone ownership and operation.

This seems a particularly urgent issue for manufacturers and law makers to address, given the alarming number of drones lost each month – a problem that has led some drone owners to create their own Facebook support group for when their drones crash or fly away!

From the skies to the sea, Professor David Lane (Heriot-Watt University) showed examples of his team’s autonomous, self-learning undersea vehicles. These “remotely operated vehicles” were prototyped in the lab, then further developed in the unpredictable and hostile open water on tasks such as inspecting deep-sea drilling platforms.

Many of the robotic systems presented during this meeting required the integration of significant quantities of human knowledge and ongoing, autonomous self-learning in order to develop the capability to operate in hostile, dangerous or humanly inaccessible environments. The added need for intelligent manipulation further complicates the design and operation of these devices.

Perhaps most complex of all are robotic systems in medicine.

A sense of human-computer interaction

Sethu Vijayakumar demonstrates his autonomous prosthetic limb on fellow speaker, Nick Roy

Professor Sethu Vijayakumar (University of Edinburgh) specialises in machine learning techniques for the real-time control of large-degree-of-freedom anthropomorphic robotic systems, such as robotic arms and prosthetic hands – which he demonstrated by operating a prosthetic hand attached to fellow speaker Nick Roy!
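
To give a flavour of what real-time control of an articulated limb involves, here is a deliberately simplified sketch – not Professor Vijayakumar’s method – that steers a two-link planar arm’s fingertip towards a target using small Jacobian-transpose steps on the joint angles; the link lengths, gain and target are arbitrary.

```python
# Toy real-time control of a 2-link planar arm: nudge the joint angles so the
# fingertip approaches a target. Real systems handle many more degrees of
# freedom, dynamics and learned models; everything here is illustrative.

import math

L1, L2 = 1.0, 0.8                       # link lengths (arbitrary units)

def fingertip(q1, q2):
    """Forward kinematics: fingertip position for joint angles q1, q2."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def step_towards(q1, q2, target, gain=0.5, eps=1e-5):
    """One Jacobian-transpose step reducing the fingertip-to-target error."""
    x, y = fingertip(q1, q2)
    ex, ey = target[0] - x, target[1] - y
    # Numerically estimate the Jacobian columns d(x, y)/dq1 and d(x, y)/dq2.
    cols = []
    for dq1, dq2 in ((eps, 0.0), (0.0, eps)):
        xn, yn = fingertip(q1 + dq1, q2 + dq2)
        cols.append(((xn - x) / eps, (yn - y) / eps))
    # dq = gain * J^T e
    q1 += gain * (cols[0][0] * ex + cols[0][1] * ey)
    q2 += gain * (cols[1][0] * ex + cols[1][1] * ey)
    return q1, q2

q1, q2, target = 0.3, 0.3, (1.2, 0.9)
for _ in range(300):                    # a few hundred small corrective steps
    q1, q2 = step_towards(q1, q2, target)
print("fingertip:", tuple(round(v, 3) for v in fingertip(q1, q2)))
```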

As Professor Andrew Blake (The Alan Turing Institute) described, robotic vision developed using machine learning techniques is the most advanced “sense” available for RAS. However, the integration of other senses into RAS is also important. For example, Professor Katherine Kuchenbecker (University of Pennsylvania) argued that haptics (touch) may become essential for robotic surgery.

Designing robots to perform delicate tasks – such as lifting an egg without crushing it – or enabling a surgeon, and ultimately a fully robotic system, to operate on a human is a difficult research challenge. Her team has made a major breakthrough by identifying vibration (and its acoustic representation) as a key physical proxy for the sense of touch.
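
To caricature that idea in a few lines of code: treat a burst of high-frequency energy in an accelerometer trace as a touch event. The window size, threshold and synthetic signal below are all invented; real contact sensing for surgery is far more sophisticated.

```python
# Sketch of vibration as a proxy for touch: flag "contact" when the windowed
# RMS of an accelerometer signal, after removing its slow-moving mean,
# crosses a threshold. Signal, window and threshold are illustrative only.

import math

def contact_events(samples, window=8, threshold=0.05):
    """Yield (sample index, RMS) wherever vibration energy exceeds threshold."""
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        mean = sum(chunk) / window               # remove gravity / slow drift
        rms = math.sqrt(sum((x - mean) ** 2 for x in chunk) / window)
        if rms > threshold:
            yield i, rms

# Synthetic trace: quiet, then a burst of vibration as the tool meets tissue.
trace = [0.0] * 32 + [0.2 * (-1) ** k for k in range(16)] + [0.0] * 16
for idx, rms in contact_events(trace):
    print(f"contact-like vibration at sample {idx} (RMS {rms:.3f})")
```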

On the very smallest scales, Professor Bradley Nelson (ETH Zurich) described his group’s work combining engineering, biology and medicine to develop robots for use inside the human body. His talk on micro- and nano-robotics connected beautifully with the biomimicry discussed by Bernard Charlès at the start of the day.

It is clear that our five senses, combined with the non-linear complexity of the human mind, remain a challenge to mimic in robotic form, but advanced RAS are also helping us understand the limitations of manual approaches.

We have already sent robotic geologists to Mars and autonomous systems to the edges of the Solar System.

In turn, exploiting the laws of physics and of biological systems at the smallest scales is enabling precision surgery and autonomous drug delivery in ways unimaginable in previous decades.

Combined with data-driven, probabilistic mathematics and machine learning, these advances give hope that the promise of future autonomous robots will be realised quickly, for the benefit of mankind and our environment.

Videos of the presentations are available on our YouTube channel.