
“The risks of using artificial intelligence in the military context are worrying”

The convergence of geopolitical tensions and the rise of emerging technologies makes this a historic moment, one in which the debate over security, even beyond our planet, becomes crucial.


Mallory Stewart, Assistant Secretary for Arms Control, Verification and Compliance (AVC) at the US State Department, argues that the international security environment is increasingly challenging, while in the multilateral political arena the treaties and other architectures adopted to avoid inadvertent escalation, misunderstandings, and miscalculations with unfortunate consequences are being abandoned.

After visiting Brazil, the official arrived in Lima this week, where she met with Peruvian authorities and took part in a roundtable on space security and artificial intelligence with El Comercio and two other local media outlets.

Space exploration is entering a new era. Although Peru does not have a large space program, how could it, or other Latin American countries, benefit from US efforts in that field, and what are the next steps? How can the region be integrated?

I think it’s a very exciting time. A lot of what we’ve been doing in the US government is trying to pick up the lessons learned. Our efforts in space and emerging technology sometimes move forward as quickly as possible, but we don’t always sit back and ask what we could have done better. Space is a good example: after several decades of using it, we have come to realize the challenge that space debris poses. We created debris in the past, and we now understand how difficult it is to manage. So, from our point of view, it is better to work with the countries that are developing and entering the space field more fully. We are trying to help Peru understand the issues and challenges it faces, but we also want to share the lessons we have learned, because we know it is important to commit to not creating more debris that makes space operations more complicated. Right now we are entering a really exciting time in which our expanded cooperation is growing in a fabulous way.

There are thousands of tons of space junk orbiting Earth, according to NASA. (Photo: AFP/ESA/HO)

How do you evaluate Peru’s participation?

It is a good moment to say that we deeply appreciate Peru’s commitment to responsible behavior, which we saw at the UN General Assembly in 2022, when a resolution was passed to prevent more space debris from being created. We must work on what we can do to prevent any kind of challenge to our collective, comprehensive use of space in the future. The approach based on lessons learned, and on learning from Peru about the challenges we face, is really exciting.

How can Artificial Intelligence (AI) be used to explore space?

I believe we must approach the space environment from the perspective of its usefulness for all countries and of sustaining the exciting potential it offers us now, and for future generations. We must therefore use our ability to work together and open new avenues of access in places that need more representation in the space community. This will help us expand our collective understanding of where we need to go, what we need to achieve, and perhaps how we can best use space as a global community. One of the challenges space operators have faced is the limited community to which they belong, and I believe that community needs to be expanded so that new space players, such as Peru and other countries in the region, can contribute their experience and knowledge toward greater space exploration capabilities. This is not the time to limit this to a conversation among a few countries, but to expand it into what it has to be: a global conversation.

How advanced is the military use of AI? Do you see significant risks? Maybe something involving Latin America?

We in the US are still trying to understand how advanced our own military use of AI is, but also where the good uses and potential benefits are, which are already quite broad. It amazes me how pervasive the use of AI is in our iPhones, in our computers, even in the operation of mundane kitchen appliances. In the US, AI has been used in our armed forces. Our concern at the Office of Arms Control is focused especially on the risks that the use of AI could pose in the military context if there is not adequate training, an adequate understanding of the potential risks and of the challenges that inadvertent bias could introduce into an artificial intelligence system, and, above all, the risk of not including a human in crucial strategic decisions.

Artificial intelligence has enormous potential to change all aspects of life, but it comes with risks.

Our Department of Defense has extensive oversight and ethical-management requirements already in place, which are largely reflected in the political declaration on the responsible military use of AI that Under Secretary Bonnie Jenkins introduced in February 2023. That declaration represents what our armed forces have learned from their efforts to use AI responsibly. But the declaration is evolving, and we are talking with the Peruvian Government and other governments in the region to hear how the utility of AI can be maximized in a positive way, and how we can use these types of commitments and regulations to prevent the risks that AI can bring.

How can these risks be prevented?

We come back to a “lessons learned” analysis. We have seen bias and human error slip through our fingers in previous AI systems; it’s something we discovered in retrospect and that we want to avoid as AI is used more and more in the future. So we must guarantee adequate training and understanding of this technology, prevent an error in the military context from going forward without human knowledge, and guarantee that there is always a human in the loop for strategic decisions, which is one of the most important things from an arms control perspective. No decision regarding strategic weapons is ever made without human participation, without the oversight, knowledge, and extreme care and insight of a human.

What worries the United States the most about security in Latin America?

It is not necessarily my area of specialization, but I can say that it is not about Latin America specifically. I would say that as we see emerging technologies and emerging security domains expand and become extraordinarily complicated, we have security concerns for ourselves, but also for our partners. And awareness is a big part of what we do at the Office of Arms Control: trying to make everyone aware of the potential risks so that we can work together globally not only to deter those risks, but also to try to prevent them through oversight, regulation, and responsible behavior.

Regarding the relationship between AI and cybersecurity: one does not need a large budget to generate risks, and it can be done from any country, including Peru. How do you see this scenario?

It is a very important and very complicated conversation. I think AI’s ability to exacerbate some of the cybersecurity challenges, to find holes in security protections that even a human researcher couldn’t find, is something that keeps us all up at night. It’s something you see in Hollywood movies, but it’s a real concern, and I think it’s one of the reasons why we really need responsible behaviors as we appreciate how AI can become part of everything we do. Many people, offices, and agencies are studying this in the US, and I know it is something the Peruvian Government is also actively studying.

The Office of Cybersecurity has just been created in the Department of State. In the US we have a senior adviser on security and technology matters to the Secretary of State and on the White House National Security Council. We have also developed additional skills and knowledge about how this security challenge is going to affect us all. But it is also about the responsible use of AI. We have to understand how countries regulating the contexts, knowledge, and actual uses of AI could help prevent its harmful use in the cybersecurity arena. It’s a really complicated question, but it’s a good question, and I’m glad we’re all thinking about it, because cybersecurity is an area that affects us all, and it doesn’t require a lot of resources to put AI to really nefarious use. Being responsible, becoming aware of the challenges it poses, and working together to prevent its risks and their effects is something we should all work on.

Source: El Comercio
