One of the more unusual images in the Royal Society’s archives depicts two human musicians and a gold duck. This tableau represents three automata, or self-operating machines, created by Jacques de Vaucanson in the eighteenth century.
Born in Grenoble in 1709, Vaucanson was preoccupied from an early age with mechanical systems, devoting much of his attention to clocks and models. As a novice monk, he began experimenting with automata – not always with positive results. After an incident involving rudimentary systems to automate dinner service, a visitor to his Order complained about his ‘profane’ creations, and he was ordered to cease his experiments.
Of course, he did not, and we come to the flute-player, the drummer, and the defecating duck.
On 11 February 1738, Vaucanson revealed a series of mechanical creations at a public exhibition: two wooden figures, one playing the flute and the other playing the pipe and tabor, and a gold-plated duck. The flute-player, painted white to mimic a marble statue, contained a system of pipes and bellows that enabled it to play 12 melodies. The duck, meanwhile, was believed to contain a functioning intestine, through which it ‘swallowed corn and grain and, after a pregnant pause, relieved itself of an authentic-looking burden.’ The intestine, however, was later discovered to be a fake: Vaucanson had created a separate system for the duck to excrete breadcrumbs that had been dyed green.
The exhibit became the subject of significant public interest – so much so that Voltaire described Vaucanson as ‘a rival to Prometheus, [he] seemed to steal the heavenly fires in his search to give life’. It also fed into a broader pattern of eighteenth-century scientific exhibits that sought to advance, and increase the popularity of, science by linking it to entertainment.
These public depictions of autonomous machines – and the hopes and fears that accompany them – form part of a long history of debates about artificial intelligence (AI). Today, recent advances in technologies such as machine learning have reinvigorated widespread discussion about AI and its consequences for society. This cultural hinterland – of which Vaucanson is a part – continues to shape how technologies are portrayed in media, culture and everyday discussion; it influences what societies find concerning or exciting about technological developments; and it affects how different publics relate to AI technologies.
Public awareness of the technologies driving recent advances in AI is low. Extensive public dialogues carried out by the Royal Society showed that only 9% of those surveyed had heard the term ‘machine learning’. From these dialogues emerged a number of hopes and fears about AI. While participants could imagine AI systems being more objective or accurate than humans, they also expressed concerns about the potential harms these systems might cause, or about humans being replaced by machines.
In the absence of widespread public awareness of AI technologies, most people’s views about AI will be shaped by ideas that are pervasive in public consciousness. A series of workshops by the Royal Society and Leverhulme Centre for the Future of Intelligence has been exploring how AI is portrayed in public debates, and the implications of these portrayals. In examining the narratives prevalent today, key themes that emerge include:
- A strong tendency for fictional narratives to anthropomorphise AI, or present it in human form;
- A prevalence of narratives portraying extremes, with exaggerated hopes for intelligent machines or dystopian futures in which humans are obsolete or under attack; and
- Issues of representation, with depictions of AI perpetuating potentially harmful societal stereotypes.
These stories can influence the policy environment that surrounds AI. They can direct the attention of the public, policymakers, and researchers to (or away from) particular areas of opportunity or concern, and can influence how societies respond to technological advances. Potentially significant policy consequences for AI research, funding, regulation and reception could follow.
Building a well-founded public dialogue will be key to continued public confidence in the systems that deploy AI technologies, and to realising the benefits they promise for society and the economy. Since the launch of the machine learning project, the Royal Society has been creating spaces for public discussion about AI technologies and their implications for society.
The AI narratives project is the latest step in this programme. A write-up of the project, published today, poses questions for researchers and communicators to consider when engaging in debates about AI, such as:
- How can stories be created that generate engagement without contributing to hype?
- How can a wider range of voices be brought into public discussions about AI?
- How can researchers be supported to engage in public debates about AI technologies, and how can public dialogues support new ways of thinking and talking about AI?
As AI technologies are put to use in a growing range of contexts and applications, continuing engagement between researchers, policymakers, and the public will be important in helping to ensure that the benefits of AI are shared across society.
Jessica Montgomery is a senior policy adviser at the Royal Society, where she leads programmes on AI and machine learning.