The United Nations has created two new international bodies to govern artificial intelligence development, marking the most comprehensive global effort to regulate AI since ChatGPT's release sparked worldwide adoption of the technology three years ago.
The U.N. General Assembly adopted a resolution last month establishing a Global Dialogue on AI Governance forum and an independent scientific panel of experts to address AI’s rapid advancement and potential risks, including engineered pandemics, large-scale misinformation campaigns, and autonomous systems operating beyond human control.
Secretary-General António Guterres will launch the governance forum Thursday during the annual high-level U.N. meeting, creating a venue for governments and stakeholders to discuss international cooperation and share regulatory solutions. The forum will meet formally in Geneva next year and New York in 2027.
The scientific panel will include 40 experts led by two co-chairs, one from a developed nation and one from a developing country, a structure that has drawn comparisons to the U.N.'s Intergovernmental Panel on Climate Change. Recruitment for panel positions is expected to begin soon.
A U.N. Security Council meeting Wednesday will address how the Council can ensure AI applications comply with international law and support peace processes while preventing conflicts.
The new governance architecture represents the latest multilateral effort to regulate AI development, following three AI summits organized by Britain, South Korea, and France that produced only non-binding voluntary pledges.
Isabella Wilkinson, a Chatham House research fellow, characterized the bodies as “a symbolic triumph” and “by far the world’s most globally inclusive approach to governing AI.” However, she warned that “in practice, the new mechanisms look like they will be mostly powerless,” questioning whether the U.N.’s administrative structure can effectively regulate rapidly evolving technology.
A group of influential AI experts has called for governments to establish binding “red lines” for AI development by the end of next year, arguing the technology requires “minimum guardrails” to prevent “the most urgent and unacceptable risks.”
The expert group includes senior employees of OpenAI, Google DeepMind, and Anthropic, who advocate an internationally binding AI agreement similar to the treaties banning nuclear testing and biological weapons.
Stuart Russell, a computer science professor at UC Berkeley and director of its Center for Human-Compatible AI, suggested requiring developers to prove their systems are safe before gaining market access, as is already required of pharmaceuticals and nuclear power plants.
“The idea is very simple,” Russell said. “As we do with medicines and nuclear power stations, we can require developers to prove safety as a condition of market access.”
Russell proposed that U.N. AI governance could mirror the International Civil Aviation Organization, which coordinates safety regulators across countries to ensure unified standards.
Russell also recommended that, rather than drafting rigid regulations, diplomats develop a “framework convention” flexible enough to accommodate AI’s rapid technological advances.
The initiative addresses concerns from experts who warn that tech companies are racing to develop increasingly powerful AI systems without adequate safeguards, potentially creating existential threats to humanity.
The governance bodies face the challenge of regulating a technology that evolves faster than traditional diplomatic processes can respond, and their effectiveness will depend on whether they can adapt at a comparable pace.