As AI technologies advance at pace, governments across the world are developing national strategies to harness their benefits for economic advantage and societal well-being.

The potential of AI to improve the lives and livelihoods of people across the world is significant, from helping address climate change and food security, to supporting elderly care and healthcare delivery. However, these technologies also have the potential to reinforce social divisions and inequalities, with implications for those already at the margins of society.

Advances in science have long relied on international flows of people and ideas. At a time when questions about how the benefits of technological progress are shared across society are at the fore of public and policy debate, maintaining strong international collaborations can play an important role in connecting communities and ensuring that technology advances in ways that benefit all in society.

In this context, earlier this year the Royal Society and US National Academy of Sciences convened ‘An International Dialogue on AI’ to explore areas where international collaboration could help advance AI research and policy.

AI research and policy today covers a huge range of issues, including:

  • the politics of AI technology (addressing concerns about bias and fairness; building safe and secure systems; ensuring explainability and transparency);
  • domestic policy (building skills; data governance; support for research);
  • international political economy (competition policy; global trade and data flows; international security); as well as
  • a variety of application-specific challenges (data access; standardisation; regulation).

So where can international collaboration help?

Better understanding the impact of AI on work and the economy

The interactions between AI and the global economy are complex. AI adoption is occurring against a backdrop of variable productivity growth and increasing concerns about the impact of rising income inequality on social cohesion.

Sensational early estimates of the impact of AI on employment put AI-enabled automation centre-stage in many public and policy debates about its economic effects. But technology is not a unique or overwhelming force: its impact on employment is influenced by social, political and economic factors, which will vary across nations. The way in which AI affects employment is contingent on factors including: the relationships between workers and employers; demand for AI-produced goods and services from businesses and consumers; regulatory frameworks; business incentives to adopt AI; the skills mix in the economy; whether complementary investments – for example in infrastructure – are necessary to implement AI-enabled solutions; and barriers to market entry for new firms.

Predicting the impact of AI on work – and who may be at risk of being economically disadvantaged by the widespread adoption of AI – is therefore challenging.

Understanding the dynamics of this new economy is made more complex by the difficulties of measuring its activity. Without understanding how data flows across borders, how value is created by platforms, how work is changing, how markets are structured, how new business models develop, or how to value data, it is difficult to develop effective policy responses. In the first instance, there is therefore a need to understand how the digital economy can be measured, and how existing policy structures – such as trade rules from the WTO – might apply to AI-enabled products and services.

Advancing research and policy to create trustworthy AI systems

Hype surrounding the potential benefits of AI is increasingly tempered by growing understanding of the risks associated with AI technologies, often as a result of their disparate societal impact. Recent years have seen a variety of news stories about misuse of personal data, biases in AI systems, and issues arising when people and AI systems interact.

Many of these stories illustrate the potential of AI to reinforce existing social divisions or biases, with consequences for equality and social cohesion. The benefits and risks associated with these systems are unevenly distributed across society, with vulnerable communities potentially being further marginalised as a result of their deployment. These issues come to the fore as AI systems are used in policy areas where complex social and political forces are at work.

Changing technological capabilities and patterns of technology use are also challenging current notions of privacy. Personal data is collected in new and potentially unexpected ways; machine learning and advanced analytics can re-identify individuals in datasets previously considered to be anonymised; and algorithmic tools can use data from different sources to target advertising or services in sophisticated ways that raise concerns about profiling.
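The re-identification risk described above can be illustrated with a minimal sketch of a so-called linkage attack, in which an "anonymised" dataset is joined to a public one on shared quasi-identifiers. All records, field names and the `reidentify` helper here are invented for illustration:

```python
# Illustrative linkage attack: a dataset with names removed is re-identified
# by joining on quasi-identifiers (postcode, birth year) that it shares with
# a public register. All data below is fictional.

anonymised = [
    {"postcode": "CB2 1TN", "birth_year": 1984, "condition": "diabetes"},
    {"postcode": "OX1 2JD", "birth_year": 1990, "condition": "asthma"},
]

public_register = [
    {"name": "A. Example", "postcode": "CB2 1TN", "birth_year": 1984},
    {"name": "B. Sample", "postcode": "OX1 2JD", "birth_year": 1990},
]

def reidentify(anon_rows, public_rows):
    """Match rows on (postcode, birth_year) to recover identities."""
    index = {(p["postcode"], p["birth_year"]): p["name"] for p in public_rows}
    return [
        {"name": index[(r["postcode"], r["birth_year"])], **r}
        for r in anon_rows
        if (r["postcode"], r["birth_year"]) in index
    ]

matches = reidentify(anonymised, public_register)
# Each "anonymous" health record is now linked to a named individual.
```

Even this two-field join is enough to defeat naive anonymisation, which is why privacy research has moved towards formal guarantees such as differential privacy rather than simply removing names.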

Questions about the explainability of AI systems have also come to the fore. Once trained, some of the most sophisticated AI systems – notably those based on deep learning – are ‘black boxes’: although they can produce statistically reliable results, it might not be clear how or why those results have been generated. Where the decisions informed by these technologies have a significant impact – personally or socially – accuracy is unlikely to be sufficient to secure public confidence in their use or support effective structures for accountability.

Such design concerns will become increasingly important as AI is deployed in organisations or contexts that are far removed from the technical expertise that created the system. This context influences not only the ways in which AI development is shaped by existing social biases, but also the ways in which humans and AI systems interact.

In seeking to resolve these issues, both technology-enabled and human-led solutions can play a role, and a range of research and policy responses are emerging. Advancing these ideas requires a sophisticated public and policy dialogue, bringing to the table a wide range of expertise to fully examine the impacts of widespread AI adoption.

Beyond these local solutions, there is also a case for pursuing the development of AI as a form of civic architecture, which would work to bolster democracy and civic engagement. This type of institutional approach would ensure that AI: is open and operates in the service of all in society; works to explicit, transparent rules and roles; enables institutional memory and learning; and works to established processes of oversight and accountability.

Supporting applications of AI for social good

The application of AI to a range of public policy challenges could bring great benefits for all in society.

  • Healthcare applications of AI are already emerging, with AI-enabled tools enhancing diagnosis or improving monitoring for conditions including dementia, cancer and eye disease.
  • Schools can use AI to help improve teaching and learning, by providing tailored learning plans and feedback, by offering new types of course, or by helping bridge language barriers.
  • AI can help develop new tools to tackle sustainability issues, including monitoring tools to help track endangered species in the wild.

Many of these challenges would benefit from international collaboration, bringing together diverse expertise and different datasets to generate new insights and create tools that work for a range of users. In natural language processing, for example, multilingual datasets are vital in creating systems that can work for users across geographies.

In order to advance applications in these areas, action is needed to create secure shared data resources and frameworks for research collaborations. Access to international datasets in these areas of interest and the development of internationally agreed standards in relevant application areas could further advance this cause.

Enabling international collaborations and dialogues

High-quality AI research can be found across the globe, in both academia and industry, and there is a strong culture of international mobility amongst AI researchers. These ways of working could lay the foundations for strong international collaborations on projects that advance the use of AI for social good.

History provides examples of how scientists can collaborate across borders in order to influence the path along which technologies progress. The Pugwash conferences on the development of nuclear technologies, for example, provided opportunities for scientists from both sides of the Iron Curtain to communicate with each other, sharing technical knowledge and insights into emerging applications. Scientists could collaborate, even in the midst of intense political conflict between states. Insights generated through the meetings have informed a variety of international agreements.

There is now widespread consensus that action is needed to create an environment of careful stewardship of AI technologies, to ensure that their benefits are brought into being safely and rapidly, and that these benefits are shared across society. Levers for action to promote the safe and rapid use of AI technologies can operate over multiple levels – from the research culture that shapes the priorities of AI technologists and how they work, to national policies that advance the deployment of AI, and international agreements that set standards for its use.

In the coming years researchers and policymakers will need to:

  • Prioritise: identify areas of pressing need or significant benefit, and work to create solutions to these needs.
  • Specialise: advance policy debates by focussing on specific use-cases, in order to move beyond high-level statements and develop application-specific responses.
  • Mobilise: support mechanisms that advance international cooperation in key areas of interest.

Progress will require engagement across companies, investors, governments, researchers, and publics to create a vision for the development of AI that benefits society, and to advance collective action across public and private sectors – and international boundaries – in order to bring this into being. By defining global challenges against which AI could be deployed, designing incentives and opportunities for progress in those areas, and developing structures for dialogue across countries and research communities, businesses, governments and researchers can shape the development of AI for societal benefit.


You can read more about these areas and discussions at the ‘AI: an international dialogue’ workshop on our webpage (PDF).