As artificial intelligence evolves rapidly, the UN introduces global governance efforts, but questions arise over whether diplomacy can keep pace with emerging risks.
*The UN is entering the AI arena with new oversight bodies, but experts fear these efforts may be outpaced by the very technology they aim to control. Image: CH*
New York, United States — September 25, 2025:
Artificial intelligence has officially been elevated to the level of global diplomacy. This week, during its high-level meetings in New York, the United Nations placed AI alongside climate change, nuclear arms, and pandemics as a defining challenge of our era. But even as world leaders move to confront the growing power of artificial intelligence, many are questioning whether the UN can regulate it in time—if at all.
The General Assembly’s recent approval to establish a Global Forum on AI Governance and a new 40-member scientific panel has been welcomed as a milestone. UN Secretary-General António Guterres is set to launch the forum this week. The panel, modeled on the climate-focused IPCC, will include experts from both developed and developing nations, but its first formal sessions are not scheduled until 2026 in Geneva, with a follow-up in New York in 2027.
These new mechanisms are being celebrated for their inclusivity, but critics argue they may be outdated before they begin. While UN diplomats take years to build consensus, AI development moves at breakneck speed. Since the introduction of ChatGPT in late 2022, the world has seen an unprecedented acceleration in generative AI capabilities, prompting both excitement and alarm. With advanced models now capable of writing code, generating images, manipulating video, and convincingly imitating human speech, calls for urgent safeguards have grown louder.
Inside the AI industry itself, voices are increasingly advocating for international regulation. Executives and researchers from OpenAI, Google DeepMind, and Anthropic have urged governments to define clear red lines and adopt legally binding agreements to prevent catastrophic misuse. Among their concerns are AI's potential role in enabling engineered pandemics, autonomous weaponry, and large-scale disinformation campaigns.
One of the most vocal experts, UC Berkeley professor Stuart Russell, believes the path forward is clear. He argues that AI should be treated like nuclear energy or pharmaceuticals, where developers must prove safety as a condition for public deployment. Rather than locking the world into rigid rules, he proposes a flexible global framework, adaptable to new developments as they arise, much like how international aviation is regulated through the UN-affiliated International Civil Aviation Organization.
Despite the ambition, skepticism lingers. Previous global AI summits hosted by the UK, France, and South Korea produced only voluntary pledges, lacking enforcement power. The UN’s new approach, while more structured, still faces the same fundamental problem: a lack of binding authority. Critics warn that without clear accountability, these governance structures risk becoming symbolic gestures, outpaced by the very technologies they seek to contain.
What makes the challenge even more urgent is the closing window of opportunity. AI’s influence is rapidly expanding into military systems, financial markets, healthcare, and governance. As the technology begins to shape the rules of global power, the institutions designed to regulate it are struggling to catch up.
Still, the inclusion of AI on the UN’s main stage signals a shift in global consciousness. There is now broad recognition that AI is not just a technical issue—it is a geopolitical one. The stakes are no longer about innovation alone, but about how power, safety, and human values are protected in an age of intelligent machines.
If the UN can seize this moment, its efforts may lay the foundation for a new era of responsible technology. But if it fails to act with urgency and authority, artificial intelligence may soon evolve beyond the reach of any global institution.
As Stuart Russell warned, “If world leaders don’t govern AI now, AI may soon govern them.”
