AI and Diplomacy: Interview with Katharina Höne


Dr. Katharina Höne is a researcher at DiploFoundation, an NGO working to improve global governance and strengthen the role of small and developing states. Her research focuses on the interplay between technology and diplomacy.

Harvard Political Review: You were one of the main contributors to the DiploFoundation’s report on artificial intelligence in diplomacy. What were the main takeaways of that report?

Katharina Höne: AI is an umbrella term covering many types of technologies, such as machine learning, but it’s also a moving target. When we talked about AI ten or twenty years ago, we were talking about something very different. As new innovations appear, the threshold of what we mean by AI shifts as well. It’s a field in constant motion, and it’s very diverse, so it’s hard to generalize.

In the study, we looked at three things relating to AI and diplomacy. First, we looked at how artificial intelligence potentially shifts the environment in which diplomacy is practiced. In this area, we also have questions of lethal autonomous weapons and the dangers associated with that.

Second, we looked at AI as a tool for diplomatic practice. What AI applications are there that could support the work of diplomats? There are interesting examples in the area of natural language processing that could help diplomats prepare better for negotiations, for example.

Third, we’re looking at AI as a topic for diplomatic negotiations. Because it’s so pervasive and because it has such a great impact on our social, economic, and political lives, it will show up in international negotiations everywhere: commerce, human rights, basically everything. That means diplomats now and in the future need to have a very good awareness of what AI is and how it impacts their area of focus.

Our first recommendation is that ministries of foreign affairs engage in capacity building so that everyone has a basic understanding of AI. Second, there needs to be an exploration of artificial intelligence tools for diplomatic practice. Third, with new technology, there’s always the concern that existing inequalities between countries are exacerbated. When we talk about the internet, we talk about the digital divide. As some countries develop the technology further, better, or faster, we need to ask about the countries that are potentially left behind.

HPR: How is AI impacting geopolitics and geoeconomics?

KH: Here I would definitely put the question of lethal autonomous weapons first. There are currently negotiations going on in Geneva on this. Some countries say that we cannot regulate lethal autonomous weapons right now because if we regulate this area, we’re going to hinder innovation. Other countries say the dangers of using that technology are so great that we have to do something now before we actually see it being used in the field.

There’s also the question of an arms race for AI. If you make even a small advance in this technology, you could make a huge leap, comparable to the Soviet Union launching Sputnik: a technological advancement that puts one country well ahead of the others.

HPR: How do you see the relationship between AI and human rights, especially in regards to questions of discrimination and discriminatory data?

KH: I think human rights apply online as they do offline, and the World Summit on the Information Society was already thinking about this in the early 2000s. States that want to take a leading role in AI need to take into account the human rights dimension, with questions of freedom of expression, discrimination, and the rights to privacy, home, and correspondence.

Machine learning depends on vast amounts of data on individual people and on combining different data sources to get a fuller picture. On the positive side, if governments are doing that, they have a better idea of their citizens and their needs, which means their service provision can be a lot more efficient. At the same time, the same capability has a huge potential for abuse, and I think this is where the human rights question, and awareness of it, comes in.

HPR: What do you think the international community’s response to that should be? Should it be more on the national level, or should there be more international cooperation to improve this?

KH: I think it’s both. We need an international conversation on this, and we need individual states applying this within their domestic jurisdictions. There are two conversations going on here: one on ethics and one on human rights. There’s some overlap, but when we talk about human rights, we already have an agreed framework. Human rights are universally accepted, so we can start from the Universal Declaration of Human Rights and then move on to other conventions. I think this is the better avenue to pursue in the sense that we already have a body of rights that are generally accepted, and then the question is which new dangers we need to be aware of.

HPR: What are the main opportunities for cooperation on AI policy in general, and what are the main challenges?

KH: If you look at the national AI strategies that many countries have launched over the last two years, there’s a tension between competition on one side and cooperation on the other. Countries are working on their competitive advantage in terms of AI. Very often, the sense of competition is in relation to the United States and China, the two countries seen as leading in the technology and as having the capacity to make quick advances in AI research and application. Other countries look to these two as guideposts: something to compare themselves to and to position themselves in relation to.

On the other hand, we find allusions to cooperation in these national AI strategies. There is the idea that we need to cooperate in order to share resources, especially data, to work on AI and machine learning. For smaller countries, this need for cooperation is functional, driven by the recognition that individual countries don’t have the resources to become a leader on AI by themselves.

We also need to speak about the questions of shared norms and ethics. For that, we need to cooperate.


This interview has been edited for clarity and length.


Image Source: Pixabay/Gordon Johnson
