
Global Leaders Convene at UMD to Advance Responsible AI in the Military Domain

Picture of Representatives at AI Plenary

On March 19-20, 2024, 161 representatives from 60 countries convened, in person or online, at the School of Public Policy’s Thurgood Marshall Hall to discuss how to responsibly integrate AI and autonomy into the military domain. The inaugural plenary session of countries that have endorsed the Political Declaration on Responsible Military Use of Artificial Intelligence (AI) and Autonomy was organized by the U.S. Department of State along with the Department of Defense and co-hosted by the Center for International and Security Studies at Maryland (CISSM).

AI and autonomy are being integrated into the military domain at a rapid pace, and the opportunities and dangers are manifold. Ukraine and Russia are already employing AI for battlefield data analysis to process the vast amounts of information the war is generating. While humans remain in the decision loop, there is no international consensus on the appropriate level of human oversight. Indeed, the world sits at an inflection point as states rapidly adopt algorithms to enhance decision-making speed and threat recognition, signaling a major shift in global security dynamics.

The Declaration is a normative framework comprising ten guiding principles and seven other commitments announced by Under Secretary of State Bonnie Jenkins at the Responsible AI in the Military Domain (REAIM) summit in February 2023. It is part of a process intended to build international consensus on best practices and guide states’ development, deployment and use of these technologies in the military domain while ensuring human accountability, compliance with international humanitarian law and careful consideration of benefits versus risks. To further this process, the United States is convening an ongoing dialogue among endorsing states. Fifty-four states have endorsed the Declaration so far, including one that did so during the plenary.

Reflecting the close cooperation on this initiative between diplomats and military officials in the United States, Assistant Secretary of State for Arms Control, Deterrence and Stability Mallory Stewart gave keynote remarks on the first day, while Madeline Mortelmans, acting assistant secretary of defense for strategy, plans and capabilities, gave the second day’s keynote. Numerous states, including the Republic of Korea, Ukraine and the Netherlands, gave briefings about how their governments are developing policies and practices to implement the guiding principles. Looking ahead, co-chairs of the three working groups on accountability, oversight and assurance laid out their plans for the upcoming year and encouraged all interested endorsing states to participate in their activities.

Picture of UMD Faculty and Students with Mallory Stewart
CISSM Director Nancy Gallagher, ARLIS Research Scientist Paul Lopata, SPP Ph.D. student Samuel Hickey and SPP MPP student Adam Abdel-Qader pictured with Assistant Secretary of State for Arms Control, Deterrence and Stability Mallory Stewart.

One of the recurring themes of the plenary was the challenge of mitigating unintended bias. Bias in data is, to a degree, unavoidable, but its consequences are amplified in a military setting. Biased data might not only lead to discrimination based on gender, race or religion, for example, but might also cause a system to misidentify noncombatants, with deadly consequences.

Representatives from DOD’s Chief Digital and AI Office (CDAO) gave a presentation on the Responsible AI (RAI) Toolkit they made publicly available in November 2023 to help users capitalize on innovation in ways that align with RAI best practices. When the Toolkit was released, Deputy Secretary of Defense Kathleen Hicks (MPM ’93) declared that “It is imperative that we establish a trusted [RAI] ecosystem that not only enhances our military capabilities, but also builds confidence with end-users, warfighters, and the American public…”

During the first day of the plenary, CISSM Director Dr. Nancy Gallagher led a discussion with Congressman Ted Lieu (CA-36), co-chair of a newly formed bipartisan task force exploring how Congress can promote American leadership in AI innovation and provide guardrails to safeguard against deliberate or inadvertent misuse. Representative Lieu acknowledged that “algorithms are not designed to seek the truth. They are designed to return the most popular response.” Interpreting the outputs of these algorithms can be challenging because the most advanced AI models are a black box even to their developers, and their outputs can be further corrupted by poor, incomplete or biased data. Without the ability to trace the decision-making process, it is difficult to place a high degree of trust in an algorithm’s outcomes.

To address these challenges, states must engage with industry, civil society and academia to access creative solutions and ensure there are no blind spots.  The Declaration focuses on governmental responsibilities, but the State Department chose to hold the first plenary for the implementation process at the University of Maryland in part to symbolize the importance of cross-sector partnerships. In her opening remarks, Assistant Secretary Stewart said, “We thought it fitting to launch this implementation process at the University of Maryland – an institution itself engaged in cutting-edge technical work on AI – rather than a more traditional diplomatic setting.”

In that vein, Dr. Paul Lopata, a visiting Research Scientist at UMD’s Applied Research Laboratory for Intelligence and Security (ARLIS), explained how the University has positioned itself at the intersection of technology innovation and ethical values. He gave an overview of some of the cutting-edge AI research underway at the University of Maryland Institute for Advanced Computer Studies (UMIACS) and elsewhere on campus, as well as some of the applications being developed for government clients by ARLIS, including a field experiment involving multiple intelligent agents that can communicate information and collaborate with each other. Lopata also shared insights about the challenges of governing rapidly advancing dual-use technologies from his previous role as Principal Director for Quantum Science in the Office of the Under Secretary of Defense for Research and Engineering.

“It was an honor to host leaders from around the world committed to the responsible use of AI in the military domain and support an international process grappling with complex technological and societal issues of profound consequence,” said Robert Orr, Dean of the School of Public Policy. “Good policymaking requires expertise and partners, convening and collaboration, and this plenary co-hosted by CISSM demonstrates the important role we at the School and University can play in the policy issues that matter.”


For Media Inquiries:
Megan Campbell
Senior Director of Strategic Communications