Modular Framework for AI Governance at the UN

Key Notes

  • The Urgency to Regulate: AI’s integration into defense and security systems poses unprecedented risks, from autonomous weapons to algorithmic escalation, demanding urgent international governance to ensure responsible and ethical deployment.

  • The Governance Challenge: Global regulation faces deep obstacles: geopolitical rivalry, national sovereignty, lack of enforcement tools, and rapid technological change all hinder consensus and effective oversight.

  • The Modular Solution: This proposal introduces a UN-led modular governance framework—combining global norms, regional implementation hubs, and oversight tools such as audits and confidence-building measures—to close regulatory gaps while maintaining flexibility and scalability.

Abstract

The rapid militarization of artificial intelligence (AI) presents unprecedented risks to global security, yet existing governance models remain fragmented and lack enforceable oversight mechanisms. This research introduces a modular AI governance framework under the United Nations, designed to mitigate AI-related security threats by integrating confidence-building measures (CBMs), global audit mechanisms, and decentralized regional hubs.

Inspired by the International Atomic Energy Agency (IAEA) but adapted for AI’s unique challenges, the framework proposes an International AI Agency (IAIA) to oversee AI governance, with a global AI audit system, adversarial red-teaming mechanisms, and compliance verification tools to ensure adherence to AI non-weaponization commitments. The model also establishes binding CBMs for military AI, drawing from nuclear and cyber arms control agreements, to enhance transparency in lethal autonomous weapons systems (LAWS), AI-enabled command-and-control infrastructures, and AI-assisted strategic decision-making.

To address geopolitical resistance and the UN’s structural limitations, the framework adopts a multi-tiered approach, where regional AI governance hubs (e.g., EU, ASEAN, AU) operationalize UN-led AI security guidelines, ensuring localized enforcement and accountability without imposing a rigid framework. This decentralized structure balances global coordination with national interests, making international AI governance politically feasible, compliant with current International Law, and adaptable to technological advancements. The framework explicitly mandates Human-in-the-Loop safeguards, ensuring meaningful human oversight over autonomous AI-driven military decisions to prevent algorithmic escalation and unintended conflicts.

By embedding AI stability mechanisms within the UN system, this solution provides a scalable and politically viable approach to regulating AI and its role in security and defense. This governance model not only strengthens global security and crisis prevention but also fosters responsible AI development, ensuring that AI’s integration into military and defense sectors remains ethically grounded, legally accountable, and strategically stable.

Keywords: AI Governance, security, defense, Autonomous Weapons Systems, Human-in-the-Loop

Introduction

Artificial Intelligence (AI) has rapidly emerged as a transformative force across multiple sectors, reshaping economies, societies, and governance systems. While its applications in areas like healthcare, finance, and education have driven innovation and efficiency, the integration of AI into security and defense presents a unique set of ethical, legal, and geopolitical challenges. From lethal autonomous weapons systems (LAWS) and AI-assisted command-and-control platforms to real-time threat detection and decision-making, the militarization of AI has introduced risks that extend far beyond the battlefield.

As states race to develop AI capabilities for strategic advantage, concerns over algorithmic escalation, loss of human oversight, and accountability gaps are becoming central to international security discourse. Unlike traditional weapons systems, AI technologies are less constrained by physical or geographic boundaries, complicating efforts to monitor, verify, and control their use. Moreover, the absence of comprehensive international norms governing military AI, combined with a fragmented regulatory landscape, heightens the risk of miscalculation, unintended conflict, and erosion of humanitarian protections in warfare.

The race among states, the proliferation of algorithms, and AI’s growing integration into security and defense make it more critical than ever to establish regulation over this rapidly advancing technology. While various national initiatives have emerged, the absence of a unified, transnational standard highlights the urgent need for coordinated international governance. The United Nations (UN) and its institutional system could serve as a strong foundation for such a framework. This would not be the first time the UN plays a central role in regulating a critical domain, as demonstrated by the creation of the International Atomic Energy Agency (IAEA).

Therefore, it is worth examining the role the UN could play in establishing an international AI governance framework that ensures security, stability, and compliance in military and defense applications. At the same time, the current international landscape, the geopolitical AI race, and the structural limitations of the UN itself may pose significant challenges to the creation of such an agency. It is thus imperative to first understand AI’s role in defense and security, and to assess the regulatory obstacles it presents, in order to determine how the UN might effectively maneuver through this pivotal moment in global governance.

AI Regulation and Big Data

By examining the legal and ethical challenges of Big Data, particularly its use in law enforcement, we begin to understand the stakes and sensitivities involved in regulating artificial intelligence, whose capabilities are far more advanced, opaque, and impactful. In today’s digital era, approximately half of the global population engages with online services, leading to an unprecedented generation of data from diverse sources [1]. Big Data, a term that may be somewhat of a misnomer, represents this new era of massive, complex, and rapidly changing datasets that surpass the capabilities of traditional data management tools. These data sources include emails, social media, healthcare records, and operational data from sensors and satellites. Early adopters of Big Data technology include major enterprises like Facebook, Google, and Yahoo, which turned simple activities into rich data generation events, characterized by high volume, velocity, and variety [1].

AI algorithms, especially those based on machine learning and deep learning, require large amounts of data for training, and Big Data provides this essential fuel. Diverse, voluminous datasets help in training AI models to recognize patterns, make predictions, and improve decision-making accuracy over time. The variety and volume of Big Data enable AI systems to learn from a broader range of examples, improving their ability to generalize and function in diverse scenarios. Moreover, AI algorithms can process and analyze these large datasets more efficiently, identifying trends, anomalies, and patterns that might be invisible to human analysts. Consequently, Big Data provides the historical data necessary for predictive AI models, allowing for more accurate and informed correlations, generalizations, and forecasts.

The legal challenges of Big Data in law enforcement primarily hinge on theoretical aspects of data protection, human rights, and the balance between public security and individual privacy [1]. The theory underlying these challenges involves navigating the complex interplay between the immense capabilities of Big Data analytics and the legal frameworks designed to protect individual rights [1]. It is safe to argue that these legal challenges carry over to AI. From a theoretical perspective, the use of Big Data in law enforcement is seen as a double-edged sword. On one hand, it offers unprecedented opportunities for predictive analytics, which can significantly enhance public safety by preventing crime and aiding in criminal investigations. On the other hand, it raises substantial concerns about privacy, data protection, and potential misuse of information, especially given the scale and depth of personal data involved [1].
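To ground the technical claim above, the following minimal sketch in Python, using synthetic data and the open-source scikit-learn library, shows the kind of large-scale anomaly detection being described; every figure and parameter here is an illustrative assumption rather than a depiction of any operational system.

    # Illustrative sketch only (Python): anomaly detection over a large,
    # synthetic dataset using scikit-learn. All data and parameters here
    # are invented for illustration.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(seed=0)

    # Simulate a "Big Data"-style feature matrix: one row per observed event
    # (e.g., a transaction or sensor reading), mostly routine with a handful
    # of unusual events mixed in.
    routine = rng.normal(loc=0.0, scale=1.0, size=(100_000, 8))
    unusual = rng.normal(loc=6.0, scale=1.0, size=(50, 8))
    events = np.vstack([routine, unusual])

    # Train an unsupervised model to flag events that deviate from the bulk
    # of the data -- the "anomalies invisible to human analysts" noted above.
    model = IsolationForest(contamination=0.001, random_state=0)
    labels = model.fit_predict(events)  # -1 = anomaly, 1 = normal

    print(f"Flagged {int((labels == -1).sum())} of {len(events)} events as anomalous")

The legal and theoretical questions that such analytics raise, particularly when applied to personal data at scale, are taken up next.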

The legal theory underlying these challenges revolves around principles established in various human rights doctrines and data protection regulations. These include the need for clarity, precision, necessity, and proportionality in any use of Big Data by law enforcement agencies [1]. A key theoretical challenge is how to balance the benefits of Big Data in law enforcement with the potential risks to individual rights. This involves considering the principles of data minimization, purpose limitation, and ensuring accuracy in data analytics to prevent discriminatory outcomes. Moreover, the theory often grapples with the automation aspect of Big Data analytics, which, while efficient, might limit human discretion and oversight, raising concerns about the potential for systemic biases and the erosion of individual privacy [1].

Consequently, many of these concepts are taken into consideration when examining AI’s ethical challenges. First, accountability in AI involves ensuring that AI systems function appropriately across all stages, including design, creation, testing, and deployment, in line with regulatory frameworks [2]. This necessitates that developers maintain a comprehensive record of the AI development process, underscoring their responsibility for the systems’ lifecycle. Transparency, on the other hand, pertains to the extent to which end-users can comprehend AI system operations. It includes understanding the functioning of AI (“simulatability”), the workings of its individual components (“decomposability”), and the visibility of algorithms. In addition, explainability and interpretability are crucial for enhancing AI transparency, trustworthiness, and accountability, potentially reducing bias and unfairness [2]. Explainable AI elucidates how AI systems arrive at decisions, thereby fostering consumer trust and aiding in the development of new models through reproducibility and checks and balances. Interpretability concerns the degree to which cause and effect can be observed in a machine learning system’s behavior.

Fairness and inclusiveness in AI address the correction of algorithmic biases that might arise from developers’ subconscious preferences. The reliance of AI systems on potentially biased data underscores the importance of unbiased data sources and diverse workforce involvement in minimizing such biases [2]. Consumer concerns about privacy and safety are paramount in AI-enabled products. The responsibility lies with technology companies to protect consumer data and inform users about data collection and usage policies. Finally, the security and robustness of AI systems are essential, given their vulnerability to cyberattacks. AI developers are expected to ensure continuous monitoring, maintain robustness against data corruption and hacking, and invest in security technologies like encryption and firewalls to safeguard against cyber threats. Thus, regular testing is imperative to enhance data protection and system security.
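To illustrate one practical form explainability can take, the sketch below applies permutation importance, a standard model-agnostic technique, to a toy model; the dataset, model, and feature count are synthetic placeholders rather than a reference to any particular deployed system.

    # Illustrative sketch only (Python): permutation importance as a simple,
    # model-agnostic explainability technique. Data, model, and feature count
    # are synthetic placeholders.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic classification task with six input features.
    X, y = make_classification(n_samples=2_000, n_features=6, n_informative=3,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Measure how much held-out accuracy drops when each feature is shuffled:
    # features the model truly relies on show the largest drops.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for i in result.importances_mean.argsort()[::-1]:
        print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")

Such post hoc feature rankings are only one ingredient of explainability, but they give developers, auditors, and regulators a concrete artifact to inspect.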

Over time, these concerns pushed stakeholders to take action and mount responses, reflecting the complexity of AI’s ethical standing. Given AI’s vast range of applications and its effects across multiple sectors, governance and regulation have become inevitable, and there is a clear need to address AI’s ethical questions through dedicated frameworks and guidelines. AI governance would therefore be an overarching term that encapsulates the comprehensive set of policies, ethical guidelines, best practices, and frameworks that inform and direct the development, deployment, and utilization of AI technologies. This multifaceted approach to governance would transcend mere compliance with legal mandates, incorporating a broader spectrum of activities and principles. It would entail the establishment of ethical norms, the formulation of industry standards, and the promotion of self-regulation among organizations and professional entities. AI governance would be characterized by its focus on creating an inclusive and holistic framework aimed at harnessing AI for societal benefit while ensuring responsible and ethical usage.

Conversely, AI regulation would refer to the formal legal and regulatory frameworks enacted by governmental and regulatory authorities. As a subset of governance, it would be more narrowly focused on delineating the legal parameters within which AI systems would be permitted to operate. This regulatory landscape is typically composed of statutes, formal rules, compliance requirements, and mechanisms for enforcement. The primary objective of AI regulation would be to safeguard the public interest, ensuring that AI applications do not infringe upon individual rights, public safety, and ethical standards.

In essence, while AI governance would represent a broad and inclusive approach to managing AI, encompassing various forms of guidance, oversight, and self-regulation, AI regulation would pertain specifically to the legal and statutory frameworks established by governing bodies. A comprehensive approach to AI governance invariably entails a harmonious integration of both regulatory measures and broader governance strategies. Yet the stakes, and the consequences, become far more urgent when these ethical and regulatory gaps unfold not in civilian sectors, but within the highly sensitive domains of security and defense.

 

The Security and Defense Implications of AI

The integration of AI into military and security operations has raised profound ethical, legal, and strategic concerns [3]. AI-driven technologies, ranging from autonomous weapons systems (AWS) to AI-enhanced command-and-control platforms, have the potential to reshape modern warfare. While AI offers significant advantages in terms of precision targeting, surveillance, threat detection, and decision support, it also introduces new risks of miscalculation, algorithmic bias, and unintended escalation. The increasing reliance on AI in military applications necessitates a robust regulatory framework that balances innovation with security imperatives and international stability.

One of the most contentious aspects of AI in defense is the development and deployment of lethal autonomous weapons systems (LAWS) [4]. These systems, capable of selecting and engaging targets without human intervention, challenge traditional principles of International Humanitarian Law (IHL), including proportionality, distinction, and human accountability in warfare [5]. While proponents argue that AI can enhance targeting accuracy and reduce collateral damage, critics warn that delegating life-and-death decisions to machines poses unacceptable ethical risks [6]. Furthermore, algorithmic biases in AI-driven targeting systems could lead to disproportionate impacts on civilian populations, exacerbating the humanitarian costs of warfare. The lack of transparency in AI decision-making also raises concerns about attribution and accountability, as states and non-state actors could plausibly deny responsibility for AI-induced military actions [7].

Beyond autonomous weapons, AI is increasingly being integrated into command-and-control systems, allowing for real-time data analysis, predictive modeling, and strategic decision-making at an unprecedented scale [6]. AI-enhanced battlefield management systems can process vast amounts of sensor data, optimize troop movements, and even simulate adversarial responses. However, this increased reliance on AI raises concerns about automation bias, where human operators become overly dependent on algorithmic recommendations, potentially leading to catastrophic errors. Additionally, adversaries may exploit AI vulnerabilities through data poisoning, adversarial attacks, or hacking, undermining the reliability of AI-driven military strategies.

 

Another critical concern is the risk of algorithmic escalation, where AI-enabled decision-making processes accelerate conflicts beyond human control. Unlike human actors who may exercise restraint, AI systems operate at speeds that could outpace diplomatic interventions, increasing the likelihood of rapid and unintended military escalation. This is particularly concerning in nuclear deterrence strategies, where AI-enhanced threat assessment models could misinterpret signals, triggering escalatory responses that were never intended [8]. As AI becomes more embedded in military operations, ensuring meaningful human oversight over AI-driven strategic decisions is paramount to preventing unintended conflicts.

The geopolitical implications of AI in security and defense extend beyond individual nation-states. The AI arms race among global powers, including the United States, China, and Russia, has heightened concerns over AI-enabled asymmetric warfare, cyber operations, and geopolitical destabilization. Unlike traditional arms races that involve physical stockpiling, the AI arms race is characterized by advancements in algorithmic warfare, cyber AI capabilities, and intelligence superiority [9]. The absence of clear international norms governing the use of AI in military applications increases the risk of strategic ambiguity and miscalculations, making AI governance in defense an urgent international priority.

In response to these challenges, there is an increasing need for binding international agreements on AI’s role in warfare, ensuring that autonomous systems remain under meaningful human control. Several diplomatic initiatives, including the UN Group of Governmental Experts (GGE) on LAWS and various non-proliferation treaties, have attempted to establish norms and restrictions on autonomous weapons. However, these efforts remain largely voluntary, with major military powers resisting legally binding commitments [5].

As AI continues to revolutionize warfare, a proactive and enforceable international governance structure is essential to prevent AI-driven conflicts, mitigate risks, and uphold the ethical principles of armed engagement. Without such a framework, the unchecked militarization of AI could lead to unprecedented security dilemmas, technological asymmetries, and erosion of international stability. Therefore, the regulation of AI in security and defense must be treated as a global priority, integrating technical safeguards, diplomatic efforts, and legally binding norms to ensure responsible AI development and deployment in military operations.

 

The struggle for regulation in this domain is significant, revolving around the ethical implications of AI in warfare, the challenges of ensuring IHL compliance by autonomous systems, and broader societal and humanitarian concerns, such as preserving human dignity and accountability in warfare [10]. AI also creates a precision paradox, where more accurate weapons may lead to more attacks, potentially increasing lawful civilian harm [11]. It is argued that these technologies also introduce unique risks, such as programming errors in AI systems leading to deadly accidents, or human operators’ over-reliance on or distrust of automated systems. The use of AI in military systems would necessitate a delicate balance between human oversight and machine autonomy, as both overtrust and undertrust in systems can have fatal consequences [11]. Moreover, these technologies can escalate the scale and speed of conflict, raising the risk of massive destruction. Despite these challenges, current international law lacks adequate mechanisms to hold entities accountable for unintended yet lawful civilian harms in armed conflicts, highlighting a gap in accountability for the consequences of lawful acts in warfare [11]. Furthermore, based on the current level of international cooperation in cyberspace, it is anticipated that collaboration on AI governance might also be limited. In cyberspace, states have opted to maintain silence regarding their capabilities and to avoid accountability and attribution by deliberately steering clear of international regulations [12]. To mitigate this, the prevailing view among states and international organizations is to apply the existing International Law governing state activities in information and communication technologies to cyberspace [5, 12].

 

The use of AI in warfare, notably in autonomous systems like drones, requires scrutiny under IHL, balancing technological capabilities with human judgment and moral agency. The lack of a unified approach to AI governance in the international arena, mirrored in the current state of international cooperation in cyberspace, suggests that establishing comprehensive and effective regulatory frameworks might be challenging. These complexities necessitate a nuanced and collaborative approach to ensure AI advancements in defense and security align with ethical standards.

AI Governance Initiatives

National and Regional Initiatives

In the discourse on AI governance and regulation, there is a compelling argument for delineating their respective spheres at different levels of governance. National governance of AI is essential due to the specific social, cultural, economic, and political contexts within individual countries. Each nation possesses unique characteristics and concerns that necessitate a tailored approach to AI governance. This national focus allows for policies and strategies that are closely aligned with the country's specific needs, values, and legal frameworks, ensuring that AI development is harmonious with national priorities and societal norms. Furthermore, national governance can be more agile and responsive to the rapid evolution of AI technologies, adapting quickly to emerging challenges and opportunities.

The National Artificial Intelligence Initiative Office, under the White House Office of Science and Technology Policy, has articulated a comprehensive AI governance framework encapsulating six fundamental pillars, each integral to fostering a responsible and beneficial AI landscape [13]. This framework accentuates “Innovation” by promoting AI as a catalyst for scientific and business advancements. “Trustworthy AI” forms a cornerstone, emphasizing adherence to civil liberties, rule of law, data privacy, and transparency. The “Educating and Training” pillar aims to leverage AI in broadening employment opportunities and access to new industries and educational paradigms. Under “Infrastructure,” the focus is on enhancing accessibility to essential resources like data and computational tools. “Applications” pertain to the widespread integration of AI across various sectors, including healthcare and education, signifying its versatile utility. Lastly, “International Cooperation” underlines the significance of global partnerships and evidence-based, multistakeholder approaches. Complementing these pillars, a robust AI governance framework includes critical components such as “Decision-making and Explainability,” ensuring AI systems are unbiased and their decisions can be understood and trusted. “Regulatory Compliance” is imperative for organizations to meet data privacy norms and manage sensitive information securely. “Risk Management” involves strategic approaches to mitigate potential pitfalls in AI, including data selection, cybersecurity, and bias rectification. Furthermore, “Stakeholder Involvement” is vital, bringing together diverse perspectives [13]. Collectively, these elements constitute a multi-faceted and comprehensive approach to AI governance, pivotal for harnessing AI's potential while safeguarding ethical standards and societal values.

On a global level, efforts in AI regulation and policymaking converge on key themes like transparency, accountability, fairness, privacy, data governance, safety, human-centric design, and oversight, yet their practical implementation presents significant challenges [14]. These regulatory efforts often augment existing laws in data privacy, human rights, cyber risk, and intellectual property but do not fully encompass the complexities of AI. The EU AI Act, the first comprehensive AI legislation globally, emphasizes a human-centric, risk-based approach [14]. It categorizes certain AI uses as unacceptable or high-risk, such as predictive policing or indiscriminate facial recognition data scraping, demanding stringent compliance for AI systems in critical sectors like law enforcement, education, or employment. Similarly, in the U.S., legislation such as the Algorithmic Accountability Act and the AI Disclosure Act has been proposed, and former President Biden issued an executive order for the safe and trustworthy development of AI [14]. China’s approach includes the State Council’s “New Generation Artificial Intelligence Development Plan” and regulations like the “Interim Administrative Measures for the Management of Generative AI Services.” These developments, alongside initiatives in Canada and Asia, reflect a growing global consensus on the need for comprehensive AI governance, although achieving this in practice remains a formidable task.

Despite growing global discourse on AI governance, the security and defense dimensions remain largely neglected. Most international AI initiatives have focused on civilian applications and algorithmic ethics, with limited attention to the military uses of AI. Efforts have largely relied on national strategies and ad hoc conferences, falling short of producing binding global standards. Notable exceptions include the U.S.-led Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy and the November 2024 statement in which President Biden and President Xi publicly agreed that humans, not AI, must retain control over nuclear weapons.

UN-Led Initiatives and the Case for an IAIA

The case for situating AI regulation within the purview of international bodies like the UN stems from the inherently global nature of AI technology. AI's impact transcends national borders, with its applications and implications affecting global issues such as privacy, security, human rights, and international trade. A unified regulatory framework under the auspices of an international organization like the UN can facilitate consistency in standards, promoting a level playing field and mitigating the risks of a regulatory race to the bottom. It also enables collective action in addressing transnational challenges, such as the ethical use of AI in warfare or the global digital divide. Such an approach can harmonize regulations across countries, fostering international cooperation and dialogue, and ensuring that AI is developed and deployed in a manner that respects global norms and principles, while allowing for national particularities.

This speaks to the pivotal role that International Law and rules play in standardizing practices across nations. When international bodies like the UN engage in the formulation of guidelines, principles, or treaties, they set global benchmarks that often guide national efforts. These international standards serve as a template for countries to develop their own regulations, ensuring a degree of uniformity and consistency in approach. This is especially critical in areas where cross-border implications are significant, such as data privacy, cybersecurity, and ethical AI use. By establishing a common framework of principles and norms, international law can help mitigate the risks of fragmented regulatory landscapes, which can lead to inconsistencies and challenges in global cooperation. Moreover, such standardization under the auspices of International Law can facilitate international trade and collaboration in AI, fostering an environment where innovation thrives while adhering to universally recognized ethical and legal standards. This symbiotic relationship between International Law and national regulation is instrumental in shaping a cohesive, responsible, and equitable future for AI technology.

To this end, the UN has been active on this topic for the past few years. Responding to a collective call from various stakeholders for an ethical framework governing AI development, UNESCO took the lead in 2018 [15]. This remains pertinent, as it showed the UN’s concern for, and understanding of, the complexity at hand. The overarching goal was to ensure that AI development benefits humanity, with a responsibility to pass on a more just, peaceful, and prosperous society to future generations [15]. The first UN Security Council meeting on Artificial Intelligence was held in July 2023. Following that, during the General Assembly’s annual meeting, Secretary-General António Guterres expressed his belief in the need for a new UN agency dedicated to managing the rapidly evolving and powerful technology of AI [16]. This agency is envisioned as a platform for global cooperation in handling AI-related issues. However, the specific functions and structure of this proposed entity remain undetermined [16].

This would not be the first time that the UN has taken the lead in regulating a transformative technology. The International Atomic Energy Agency (IAEA), since its establishment in 1957, has been instrumental in guiding the safe and peaceful development of civil atomic capabilities globally [17]. As an autonomous international organization under the guidance of the UN, the IAEA has played a pivotal role in setting international standards for nuclear safety, security, and safeguards. It provides a framework for cooperation in nuclear energy, ensuring that civil atomic programs are not diverted to military use. The IAEA's contributions encompass the development of safety standards, provision of expert guidance, and conducting regular inspections to ensure compliance with non-proliferation treaties. States have been motivated to cooperate with the IAEA primarily due to the mutual benefits of shared knowledge, technical expertise, and the global desire to prevent the proliferation of nuclear weapons. This cooperation is underpinned by the understanding that nuclear technology poses unique risks that require stringent oversight and that such oversight is best achieved through a collaborative, international approach. By fostering an environment of transparency and accountability, the IAEA has significantly contributed to international peace and security, while enabling the responsible exploitation of nuclear energy for civil purposes.

AI appears to be at a similar juncture as nuclear technology once was, necessitating a degree of international regulation to ensure its promising potential is harnessed in a structured and safe manner. The concept of an International Artificial Intelligence Agency (IAIA) under the auspices of the United Nations, similar to the IAEA, presents a strategic framework for enhancing global governance in the field of AI [18].

The Feasibility of IAIA

However, the establishment of an IAIA under the United Nations framework would face multiple obstacles, underscored by geopolitical and bureaucratic complexities [19].

Political Landscape

Primarily, the principle of national sovereignty stands as a significant impediment. States, particularly those with advanced AI capabilities, may exhibit reluctance to subordinate their authority in AI policymaking to an international entity, fearing erosion of their technological and economic autonomy. The heterogeneity in regulatory philosophies across different states further compounds the challenge. Aligning disparate national approaches to AI governance, each molded by unique socio-economic, cultural, and political landscapes, into a unified international standard is an intricate endeavor that is fraught with potential discord. This diversity is mirrored in the technological disparities between developed and developing nations, presenting additional difficulties in ensuring equitable and universally beneficial outcomes from such an agency.

Geopolitical tensions and the competitive nature of global AI development pose another challenge. The existing landscape of AI is characterized by a high-stakes race for technological, economic, and military supremacy among powers, notably the U.S. and China. These dynamics significantly hinder collaborative and consensus-driven efforts within the agency and the UN more broadly. In fact, complications related to intellectual property, access to technology, and the vested interests of private corporations leading AI innovation also pose considerable hurdles. Balancing commercial interests with the public welfare objectives of an international regulatory body, while navigating issues of intellectual property rights, could limit the scope of cooperation and transparency. When accounting for the military uses of AI, the security tensions among major powers, and the lack of trust among players, AI regulation might be seen as limiting the defensive or offensive capacity of some states while allowing others to undermine the rules and gain ground in this competition.

It is also worth noting that, unlike its leadership role in establishing the IAEA, the United States' engagement in AI governance and regulation may be more restrained amid shifting global dynamics, potentially limiting international momentum. As a result, an IAIA may not benefit from the same level of U.S. backing that once propelled nuclear governance forward.

UN Structural Limitations

The bureaucratic limitations inherent in the UN system cannot be overlooked. Achieving consensus among UN member states on the formation of the IAIA and the delegation of adequate authority and autonomy to it represents a formidable challenge. The UN’s consensus-driven decision-making process, often marred by geopolitical influences, could impede the establishment and effective functioning of the IAIA. Enforcing compliance with the agency’s regulations presents a further challenge, especially in the absence of a robust international enforcement mechanism and given the varying capacities and willingness of national regulatory bodies. Financial constraints and resource allocation pose another critical barrier. The establishment and sustenance of the IAIA would necessitate substantial and consistent financial contributions from member states, which may be challenging to secure, especially in a context where the direct benefits of such an organization could be questioned or undervalued by some states. Lastly, the rapid pace of AI development necessitates an agency that is agile and responsive, a requirement that often conflicts with the typically slow and deliberative nature of large international organizations. Regulations need to be stringent enough to deter the misuse of AI, while simultaneously maintaining enough flexibility to foster ongoing innovation in the field.

The most intricate governance issues arise from AI's use in defense and security, such as autonomous drones and enhanced military command-and-control systems [20]. The international policy frameworks addressing these issues are nascent and necessitate normative changes, historically difficult for states to achieve. Moreover, the rivalry among major powers now encompasses the technological domain, increasing risks to international stability. The relationship between AI and automated warfare has been studied in particular to understand AI's weaponization within the current global governance framework, its future advancements in defense and security, its impact on geopolitics, and the moral and ethical dilemmas posed by autonomous weapons [10]. Through machine learning especially, AI is a potent enabler of weapon autonomy. However, the debate in the UN, centered around IHL, focuses less on the technology itself and more on the human judgment and moral agency in warfare [5, 10].

 

A Modular Framework to Overcome Challenges

The conceptualization of the IAIA, an autonomous membership-based organization operating in coordination with the UN, proposes a comprehensive and cooperative approach to global AI governance. The establishment of such an agency would be a pivotal step in addressing the multifaceted challenges and opportunities presented by AI technologies. Its mandate would encompass the development and enforcement of international AI standards grounded in universally accepted ethical principles, focusing on data privacy, algorithmic bias, transparency, and accountability. Much like the IAEA, the IAIA would serve as a global benchmark-setter, guiding states to align AI development with human rights and ethical imperatives.

In addition to norm-setting, the IAIA would offer technical assistance, research facilities, and capacity building, especially for countries with emerging technological infrastructures, helping to bridge the global digital divide. The agency would conduct systematic compliance audits and assessments, creating a layer of trust and verification essential to international cooperation. As a central hub for ethical AI research and global dialogue, the IAIA would bring together governments, academia, private industry, and civil society to coordinate innovation and regulatory coherence. In doing so, it would promote equitable access to AI technologies and ensure that advancements serve the collective interest.

Importantly, the IAIA would not impose a rigid, top-down model of regulation. Instead, it would implement a modular and decentralized governance framework, designed to reflect geopolitical realities and the unique characteristics of AI. The rapid militarization of AI and its increasing integration into security and defense strategies have necessitated urgent and enforceable governance mechanisms. Unlike nuclear technology, which is centralized and physically constrained, AI is an intangible and widely distributed technology that can be developed, deployed, and modified across borders with minimal oversight. Current global governance models for AI largely depend on voluntary ethical guidelines and fragmented national regulations, which lack binding enforcement mechanisms and fail to address AI’s security risks comprehensively. The absence of a structured and verifiable governance framework leaves a significant regulatory gap, increasing the risk of unchecked AI weaponization, unintended algorithmic escalation in conflicts, and geopolitical instability.

To mitigate these challenges, a modular AI governance framework, under a United Nations–led IAIA initiative, is proposed to operate through regional enforcement hubs that integrate Confidence-Building Measures (CBMs), implement AI compliance audits, and ensure human oversight over AI in military and security domains.

1. Regional AI Governance Hubs

Recognizing the political resistance that a centralized AI governance model might face, the framework adopts a decentralized enforcement structure through regional AI governance hubs. These hubs, operating under the IAIA, would be tailored to geopolitical realities, with regional organizations such as the European Union, the Association of Southeast Asian Nations (ASEAN), and the African Union (AU) establishing localized AI security task forces. These regional hubs would facilitate the implementation of global AI norms while allowing flexibility for region-specific concerns. The decentralized model not only reduces sovereignty concerns but also enhances cooperation by placing governance responsibilities closer to the regions where AI security challenges arise.

Regional AI governance hubs would play a critical role in monitoring the application of AI in security and defense. They would also take part in mediating AI-related diplomatic conflicts, acting as intermediaries to de-escalate security risks before they spiral into broader geopolitical confrontations. By facilitating early warnings, AI risk assessments, and real-time diplomatic communication channels, these hubs could prevent miscalculations and foster AI security dialogues between nations. Unlike a centralized global AI regulatory body, which would face resistance from major powers, this regionalized enforcement model ensures that AI governance remains responsive, adaptive, and capable of mitigating AI-induced geopolitical crises before they escalate into full-scale conflicts.

These hubs would also play a critical role in AI research and contribute to the regulation of AI across other sectors beyond defense. They would help ensure that the use of AI technologies and the underlying algorithms respects human rights and adheres to international standards across various industries. Moreover, these hubs would be instrumental in guiding the civilian dimension of the AI and Big Data revolution, helping to ensure that data governance and privacy protections are upheld as the world transitions deeper into this new era.

2. Confidence-Building Measures

The proposed framework draws inspiration from the work of the IAEA but is specifically adapted to AI’s unique challenges. Unlike nuclear material, AI technologies are not physically constrained, making traditional arms control treaties ineffective. Instead, the governance model relies on CBMs as foundational mechanisms to enhance transparency and prevent conflict escalation. One of the key confidence-building measures would be the creation of an international AI transparency registry, where states and private actors involved in military AI development disclose their AI-enabled defense capabilities. This registry would classify AI applications based on their risk levels, distinguishing between general-purpose AI, AI-enhanced command-and-control platforms, and fully autonomous weapon systems. Furthermore, countries would be required to notify international governance bodies before deploying AI-enabled military systems in conflict zones, ensuring that the risks of unintended escalation are mitigated. Joint AI safety exercises, modeled after military arms control verification mechanisms, would allow for adversarial testing of AI decision-making in simulated high-stakes scenarios, ensuring that deployed AI systems remain aligned with international norms and ethical considerations.
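To make the registry idea more tangible, the following minimal Python sketch shows one possible way such disclosures could be structured as data. The class names, fields, and risk tiers are hypothetical illustrations, not an existing UN, IAIA, or treaty specification.

    # Illustrative sketch only (Python): one possible structure for entries in
    # the proposed transparency registry. Class names, fields, and risk tiers
    # are hypothetical, not an existing UN, IAIA, or treaty specification.
    from dataclasses import dataclass, field
    from datetime import date
    from enum import Enum

    class RiskTier(Enum):
        GENERAL_PURPOSE = "general-purpose AI"
        COMMAND_AND_CONTROL = "AI-enhanced command-and-control"
        AUTONOMOUS_WEAPON = "fully autonomous weapon system"

    @dataclass
    class RegistryEntry:
        declaring_state: str
        system_name: str
        risk_tier: RiskTier
        human_in_the_loop: bool                          # HITL safeguard declared?
        deployment_notifications: list[str] = field(default_factory=list)
        declared_on: date = field(default_factory=date.today)

        def notify_deployment(self, conflict_zone: str) -> None:
            """Record a pre-deployment notification, as the CBM would require."""
            self.deployment_notifications.append(conflict_zone)

    # Example use: a fictional state declares an AI-enabled decision-support system.
    entry = RegistryEntry(
        declaring_state="Exampleland",
        system_name="Sentinel Decision-Support v2",
        risk_tier=RiskTier.COMMAND_AND_CONTROL,
        human_in_the_loop=True,
    )
    entry.notify_deployment("Region X")
    print(entry)

Even a simple shared schema of this kind would give the regional hubs a common basis for tracking notifications and, later, for auditing declared systems.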

3. Human Oversight to Prevent Algorithmic Escalation

A main pillar of this framework is the establishment of international norms that ensure meaningful human oversight over AI decision-making in military operations. This could take the form of a UN General Assembly resolution which, despite being non-binding, could over time crystallize into international law. Human-in-the-loop (HITL) safeguards would be enshrined as a fundamental principle, ensuring that no AI system can autonomously authorize the use of lethal force. AI may assist in decision-making and targeting recommendations, but human operators would retain final authority in all high-risk engagements. A legally binding prohibition on AI-enabled nuclear launch decision-making would be instituted to prevent algorithmic escalation in nuclear deterrence frameworks. Furthermore, ethical oversight boards, composed of AI experts, military strategists, and international legal scholars, would be tasked with evaluating emerging AI technologies to determine their compliance with humanitarian laws and security protocols. These measures would collectively serve to prevent the development of fully autonomous AI combat systems that could act beyond human control, thereby reducing the risk of catastrophic failures in wartime decision-making.
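The control-flow implication of the HITL principle can be sketched in a few lines of code. The example below is purely illustrative and abstract: the types, function names, and console interaction are assumptions made for the sketch, and the point is only that no execution path reaches the high-risk action without an explicit, logged human decision.

    # Illustrative sketch only (Python): a minimal human-in-the-loop gate.
    # Types, function names, and console interaction are hypothetical; the
    # point is that no execution path reaches the high-risk action without
    # an explicit, logged human decision.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Recommendation:
        action: str        # the high-risk action the AI system proposes
        confidence: float  # model confidence, between 0.0 and 1.0
        rationale: str     # explanation surfaced to the human operator

    def request_human_authorization(rec: Recommendation) -> bool:
        """Present the recommendation to a human operator and log the decision."""
        print(f"AI recommendation: {rec.action} "
              f"(confidence {rec.confidence:.2f}) -- {rec.rationale}")
        decision = input("Authorize this action? [y/N] ").strip().lower() == "y"
        # Audit trail: every authorization decision is recorded with a timestamp.
        print(f"{datetime.now(timezone.utc).isoformat()} human decision: "
              f"{'authorized' if decision else 'withheld'}")
        return decision

    recommendation = Recommendation(
        action="engage designated target T-042",
        confidence=0.87,
        rationale="signature matches a declared hostile profile",
    )
    if request_human_authorization(recommendation):
        print("Action proceeds under explicit human authority.")
    else:
        print("Action withheld: no human authorization was given.")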

4. Adherence to International Law

Regional hubs and the IAIA would play a central role in ensuring AI compliance with existing International Law and International Humanitarian Law, which requires adapting these legal frameworks to the realities of modern technology without altering their core principles. The foundational values of IHL, such as distinction, proportionality, necessity, and the protection of human dignity, remain fully applicable. It is AI technologies that must be shaped and constrained to operate within these established legal and ethical boundaries, not the other way around. Adapting the application of existing law to AI in warfare is therefore essential to uphold human rights, ensure accountability, and prevent violations of humanitarian norms.

5. AI Audit System & Enforcement Mechanisms

Beyond transparency measures, effective AI governance requires robust audit mechanisms that ensure states are held accountable to shared international standards. Under the proposed IAIA framework, a comprehensive global AI audit and compliance system would be established to monitor and verify the responsible development and use of military AI technologies. Through the regional hubs, this system would go beyond self-reporting, mandating periodic compliance audits for all member states engaged in developing or deploying AI-enabled defense applications. These audits would include independent third-party verification of AI weaponization claims and system behaviors, ensuring that state actors are not bypassing agreed norms or masking prohibited capabilities. Crucially, the audit regime would integrate legal oversight mechanisms to assess compliance with IHL and broader international legal obligations. In parallel, the IAIA would implement a monitoring mechanism to uphold Human-in-the-Loop (HITL) standards. The absence of an effective enforcement mechanism, however, remains the framework’s biggest weakness.
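As a rough illustration of what a machine-checkable portion of such audits might look like, the Python sketch below encodes two hypothetical compliance rules over a declared system; the field names, rule set, and example declaration are invented for illustration and are not drawn from any treaty or standard.

    # Illustrative sketch only (Python): two hypothetical, machine-checkable
    # compliance rules that a regional hub might run over a declared system.
    # Field names, rules, and the example declaration are invented here and
    # are not drawn from any treaty or standard.
    from dataclasses import dataclass

    @dataclass
    class AuditFinding:
        state: str
        system: str
        rule: str
        passed: bool

    def audit(declaration: dict) -> list[AuditFinding]:
        """Run a minimal rule set over one declared military AI system."""
        return [
            # Rule 1: every declared system must assert a human-in-the-loop safeguard.
            AuditFinding(declaration["state"], declaration["system"],
                         "HITL safeguard declared", declaration["human_in_the_loop"]),
            # Rule 2: fully autonomous systems must have filed a pre-deployment
            # notification before any field use.
            AuditFinding(declaration["state"], declaration["system"],
                         "Pre-deployment notification filed",
                         declaration["risk_tier"] != "autonomous weapon"
                         or bool(declaration["notifications"])),
        ]

    declaration = {  # fictional example declaration
        "state": "Exampleland",
        "system": "Sentinel Decision-Support v2",
        "risk_tier": "command-and-control",
        "human_in_the_loop": True,
        "notifications": [],
    }

    for finding in audit(declaration):
        status = "PASS" if finding.passed else "FAIL"
        print(f"[{status}] {finding.state} / {finding.system}: {finding.rule}")

Checks of this kind could automate only the routine layer of an audit; the substantive legal assessments described above would still rest with human experts.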

 

Therefore, the proposed governance model offers a scalable, adaptive approach to AI security by incorporating confidence-building measures, independent audits, regional hubs, and human oversight. Its modular design enables states to adopt commitments gradually, balancing transparency with strategic interests. Unlike rigid treaties, this framework supports a tiered implementation process, beginning with basic transparency and expanding over time. Establishing the IAIA with decentralized enforcement is essential to promote responsible military AI development and safeguard global security.

Limitations

A major limitation of this paper and the proposed framework is the lack of analysis regarding enforcement mechanisms that the IAIA could utilize. While traditional UN tools, such as sanctions, monitoring mechanisms, and Security Council resolutions, could technically apply, today’s geopolitical landscape is marked by fragmentation and a lack of political will. These enforcement mechanisms depend on broad international cooperation, which is increasingly difficult to secure. Enforcement remains a fundamental weakness of international law, as it relies on states voluntarily upholding and abiding by norms. As a voluntary membership-based organization, the IAIA’s own enforcement capacity remains uncertain and should be further explored within the evolving scope of UN enforcement mechanisms.

Moreover, given current geopolitical dynamics, the U.S. is unlikely to offer the same level of support to AI oversight as it did to the IAEA. This is why the UN may need to rely on the EU and its “Brussels Effect”: despite excluding military AI from its AI Act, the EU is uniquely positioned to lead transnational efforts to set global ethical standards and governance norms for military AI [21]. With its regulatory experience and collaborative approach, the EU can bridge gaps between national security interests and international coordination to promote responsible AI use in defense [21].

It should also be noted that the IAEA was established at a time when nuclear technology had already matured, and the global race was primarily about the number of warheads. In contrast, military AI is still in its early stages of development. This highlights a key difference between the contexts of the IAEA and a potential AI oversight body: the former emerged when the technology’s capabilities were well understood, while the latter must grapple with a still-evolving field. Major powers are unlikely to regulate a technology they are still exploring, especially when doing so could limit their strategic advantage. They will only support regulation when it aligns with their national interests. However, the broader applicability of AI compared to nuclear technology makes early regulation even more urgent. Even if global consensus is premature, it is essential to start laying the foundations now so that, when the political conditions for international cooperation emerge, the governance structures are already in place. (See Annex.)

 

Conclusion: Reality Check

AI's transformative impact is comparable to the Industrial Revolution, bringing both positive and negative changes. AI's influence spans various sectors, including politics, economy, and education [22]. However, current global approaches to AI governance and data regulation are fragmented and lack a unified framework [22]. Firstly, the international landscape is marked by diverse geopolitical interests and strategic priorities, making consensus on AI governance especially difficult. Countries hold differing views on AI’s role in defense, security, economic competition, and its broader societal implications, complicating efforts to establish a universally accepted regulatory framework. Yet the risks posed by unregulated AI are widely recognized, particularly in military and security domains.

Consequently, there is a growing collective interest in ensuring the safe, transparent, and responsible development of AI technologies. The need to regulate AI, big data, and especially AI applications in defense is becoming increasingly urgent. Aligning with international AI standards not only helps mitigate these risks but also opens new avenues for economic growth, innovation, and cross-border collaboration. This is where the International Artificial Intelligence Agency (IAIA) becomes essential. As a UN-coordinated autonomous body, the IAIA would play a pivotal role in bridging the global technological divide, ensuring that AI advancements do not deepen inequalities but instead foster equitable development. More importantly, it would help enforce ethical standards in military AI applications, preserving meaningful human control and preventing the misuse of autonomous systems in warfare. Therefore, the conceptualization of the IAIA under the UN framework proposes a comprehensive, cooperative, and modular approach to AI governance, one that mirrors the success and legacy of the IAEA in the nuclear domain. This initiative would represent a vital step toward ensuring that AI development remains globally coordinated, ethically grounded, and aligned with shared interests.

In conclusion, advancing a global governance framework for military AI is not just a regulatory necessity; it is a strategic, moral, and diplomatic imperative. The proposed IAIA represents a vital step toward building a proactive and modular governance model. Establishing this framework now will ensure the international community is prepared when the conditions for collective action arise, safeguarding global security, preserving human dignity, and guiding the future of AI toward the common good.

Annex

Title: Technology Life Cycle & Capability Milestones: Nuclear Energy vs Artificial Intelligence.

This chart offers a conceptual comparison of Nuclear Energy and Artificial Intelligence, using two synchronized vertical axes to visualize both their lifecycle progression and capability evolution. The left Y-axis represents the classic Technology Life Cycle, tracing the arc from research and development to widespread growth, maturity, and potential decline. The right Y’-axis maps the development of each technology’s intrinsic capabilities, from early functionality to full potential and a conceptual post-capability phase: a point where innovation slows and governance becomes the central focus.

The core hypothesis illustrated here is that nuclear regulation emerged only after the technology had fully demonstrated its destructive and civilian potential, notably by the mid-1950s, with the IAEA founded in 1957. In contrast, when we examine the trajectory of AI, we see that it is still in an early growth stage, and its full capabilities remain undefined. This suggests that regulatory responses in AI may be forming prior to or during capability realization, a potentially unprecedented shift in the tech-policy dynamic.

This observation is illustrative rather than definitive and would benefit from further research into the lifecycle stages of each technology and the alignment (or misalignment) between technological maturity and institutional response timing and forecasting.

References

1-     Akhgar, Babak, et al. Application of Big Data for National Security: A Practitioner’s Guide to Emerging Technologies. Butterworth-Heinemann, 2015, pp. 1, 15.

2-     Camilleri, Mark Anthony. “Artificial Intelligence Governance: Ethical Considerations and Implications for Social Responsibility.” Expert Systems, vol. 2023, 18 July 2023, doi:10.1111/exsy.13406

3-     Marwala, Tshilidzi. “The Militarization of AI Has Severe Implications for Global Security and Warfare.” United Nations University, 24 July 2023, unu.edu/article/militarization-ai-has-severe-implications-global-security-and-warfare.

4-     Uzer, Mehmet Akif. “The Integration of AI in Modern Warfare: Ethical, Legal, and Practical Implications.” Cyber Intelligence & Security, CYIS, 24 Sept. 2024, www.cyis.org/post/the-integration-of-ai-in-modern-warfare-ethical-legal-and-practical-implications.

5-     Perrin, Benjamin. “Lethal Autonomous Weapons Systems & International Law: Growing Momentum Towards a New International Treaty.” American Society of International Law (ASIL) Insights, vol. 29, 24 Jan. 2025

6-     Klaus, Matthias. “Transcending Weapon Systems: The Ethical Challenges of AI in Military Decision Support Systems.” ICRC Law & Policy Blog, 24 Sept. 2024, blogs.icrc.org/law-and-policy/2024/09/24/transcending-weapon-systems-the-ethical-challenges-of-ai-in-military-decision-support-systems.

7-     Batallas, Carlos. “When AI Meets the Laws of War.” IE Insights, 3 Oct. 2024, www.ie.edu/insights/articles/when-ai-meets-the-laws-of-war/.

8-     Jensen, Benjamin, Yasir Atalan, and Jose M. Macias III. Algorithmic Stability: How AI Could Shape the Future of Deterrence. Center for Strategic and International Studies (CSIS), 10 June 2024, www.csis.org/analysis/algorithmic-stability-how-ai-could-shape-future-deterrence.

9-     Schmid, Stefka, Daniel Lambach, Carlo Diehl, and Christian Reuter. “Arms Race or Innovation Race? Geopolitical AI Development.” Geopolitics, 2025, pp. 1–30, doi.org/10.1080/14650045.2025.2456019.

10-   Barney, Nick, and Sarah Lewis. “What Is Artificial Intelligence (AI) Governance?” TechTarget, May 2023, www.techtarget.com/searchenterpriseai/definition/AI-governance.

11-   Bastit, Bruno. “The AI Governance Challenge.” S&P Global, 29 Nov. 2023, www.spglobal.com/en/research-insights/featured/special-editorial/the-ai-governance-challenge.

12-   Azoulay, Audrey. “Towards Ethics of Artificial Intelligence.” United Nations Chronicle, vol. LV, nos. 3 & 4, Dec. 2018, www.un.org/en/chronicle/article/towards-ethics-artificial-intelligence.

13-   Henshall, Will. “How the U.N. Plans to Shape the Future of AI.” Time, 21 Sept. 2023, www.time.com/6316503/un-ai-governance-plan-gill/.

14-   “Statute of the IAEA.” International Atomic Energy Agency, www.iaea.org/about/statute.

15-   Marcus, Gary, and Anka Reuel. “The World Needs an International Agency for Artificial Intelligence, Say Two AI Experts.” The Economist, 18 Apr. 2023, www.economist.com/by-invitation/2023/04/18/the-world-needs-an-international-agency-for-artificial-intelligence-say-two-ai-experts.

16-   Marcus, Gary, and Anka Reuel. “The World Needs an International Agency for Artificial Intelligence, Say Two AI Experts.” The Economist, 18 Apr. 2023, www.economist.com/by-invitation/2023/04/18/the-world-needs-an-international-agency-for-artificial-intelligence-say-two-ai-experts.

17-   Momani, Bessma, Aaron Shull, and Jean-François Bélanger. “Introduction: The Ethics of Automated Warfare and AI.” Centre for International Governance Innovation, 28 Nov. 2022, www.cigionline.org/articles/introduction-the-ethics-of-automated-warfare-and-ai/.

18-   Sauer, Frank. “Autonomy in Weapons Systems and the Struggle for Regulation.” Centre for International Governance Innovation, 28 Nov. 2022, www.cigionline.org/articles/autonomy-in-weapons-systems-and-the-struggle-for-regulation/.

19-   Crootof, Rebecca. “AI and the Actual IHL Accountability Gap.” Centre for International Governance Innovation, 28 Nov. 2022, www.cigionline.org/articles/ai-and-the-actual-ihl-accountability-gap/.

20-   Hollis, Duncan. “A Brief Primer on International Law and Cyberspace.” Carnegie Endowment for International Peace, 14 June 2021, www.carnegieendowment.org/2021/06/14/brief-primer-on-international-law-and-cyberspace-pub-84763.

21-   Csernatoni, Raluca. "Governing Military AI Amid a Geopolitical Minefield." Carnegie Europe, 17 July 2024, carnegieendowment.org/research/2024/07/governing-military-ai-amid-a-geopolitical-minefield?lang=en.

22-   Samson, Paul. “On Advancing Global AI Governance.” Centre for International Governance Innovation, 1 May 2023, www.cigionline.org/articles/on-advancing-global-ai-governance/.
