UNICC AI Hub Launched

In a landmark step toward AI coordination across the UN system, the United Nations International Computing Centre (UNICC) launched a dedicated Artificial Intelligence Hub in June 2025. The AI Hub will serve as a central platform to advance AI adoption across UN agencies, facilitating secure, inclusive, and ethical AI practices in line with the UN Charter and the Sustainable Development Goals.

Announced at the annual UN Digital Transformation Conference in Valencia, Spain, the hub is based in Geneva, with satellite teams in New York, Rome, and Nairobi. It is designed to offer technical expertise, shared computing infrastructure, policy alignment, and inter-agency project support for AI use cases ranging from humanitarian logistics to climate modeling.

“Our agencies have diverse mandates, but we all face common questions around responsible AI deployment,” said Sameer Chauhan, Director of UNICC. “The AI Hub enables us to collaborate across institutional silos and ensure the technologies we use reflect our values.”

UNICC has been developing AI tools and platforms for the UN for several years, including AI-based document processing for the World Food Programme, image recognition software for UNOSAT, and predictive analytics on refugee movements for UNHCR. The new hub builds on that legacy with a stronger governance framework, dedicated staff, and a public transparency portal.

The AI Hub’s launch aligns with broader UN initiatives, including the Secretary-General’s Global Digital Compact and the recent establishment of the Office of Digital and Emerging Technologies. Its key priorities include:

  • AI literacy and capacity-building for UN staff
  • Developing open-source, multilingual, ethical-by-design AI models
  • Supporting UN partners with risk assessment, data governance, and model validation
  • Encouraging collaboration with academic, civic, and tech sector stakeholders

Civil society groups have cautiously welcomed the move. “This is an important step toward building trustworthy AI inside one of the world’s largest institutional networks,” said Helen Keita of AlgorithmWatch. “But the hub’s success will depend on its transparency and the degree to which it involves external watchdogs and underrepresented communities.”

The UNICC AI Hub is also expected to publish twice-yearly reports outlining use cases, challenges, and successes, along with internal audits of fairness, accountability, and environmental impact.

As more global institutions turn to AI for operational optimization and decision support, the UN’s AI Hub may serve as a testbed for public-sector responsibility at scale.

G7 Hiroshima AI Process

The G7 Hiroshima AI Process, initiated during the 2023 summit in Japan, has emerged as one of the most influential global platforms for coordinating democratic approaches to artificial intelligence governance. Backed by the leaders of Canada, France, Germany, Italy, Japan, the United Kingdom, the United States, and the European Union, the initiative aims to promote shared values and responsible AI practices among advanced economies.

At the heart of the Hiroshima Process is a voluntary **International Code of Conduct for Organizations Developing Advanced AI Systems**, published in late 2023. The code outlines 11 principles that signatory governments and tech companies are encouraged to implement, including fairness, transparency, accountability, data privacy, and protections against manipulation or misuse.

“This process is about proving that democracy can still lead in the age of AI,” read a G7 communiqué issued during the summit. “Our shared values are our strength — and they must be reflected in how these technologies are governed.”

The Code of Conduct includes specific guidance on:

  • Testing and evaluation of high-risk AI models
  • Disclosure of training data sources
  • Red-teaming and safety checks before public deployment
  • Reporting obligations for developers and deployers

While the framework is non-binding, it has already influenced national policies. For instance, the UK’s AI Safety Institute (formerly the Frontier AI Taskforce) and the U.S. Executive Order on AI both echo Hiroshima principles. Similarly, the European Commission is integrating the code’s recommendations into guidance for the EU AI Act’s rollout.

Industry response has been mixed. Microsoft, IBM, and Google have voiced support and committed to elements of the code. However, civil liberties groups like the Electronic Frontier Foundation have warned that the voluntary approach may not offer sufficient protections in areas like facial recognition or algorithmic discrimination.

Beyond regulation, the Hiroshima AI Process includes a knowledge-sharing platform where G7 states exchange best practices on AI oversight, auditing, and incident response. It also supports international research collaboration, with Japan proposing the establishment of an AI Safety Research Hub in Tokyo in partnership with the OECD.

Looking forward, the G7 countries plan to review the code’s implementation in 2026 and expand engagement with Global South partners. The goal is to elevate the Hiroshima Process as a benchmark for inclusive and democratic AI governance at a time when geopolitical competition around tech standards is intensifying.

European Union, AI Convention – Goals, Parameters, Ambitions

The European Union has formally joined the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, the world’s first legally binding international treaty on AI. Opened for signature in Vilnius in September 2024 and ratified by the EU in early 2025, the convention represents a landmark moment for global digital governance.

The agreement is the culmination of nearly three years of negotiations between 46 Council of Europe member states, the EU, and observers including the United States, Canada, and Japan. Its stated goal is to ensure that the development and use of artificial intelligence systems uphold democratic values, human rights, and the rule of law.

“The AI Convention places fundamental rights at the core of how we govern algorithmic systems across borders,” said Christel Schaldemose, a Danish MEP who participated in the European Parliament’s digital affairs committee. “It’s a foundational baseline that builds trust — not just in the EU, but globally.”

The Convention outlines four core obligations for signatories:

  1. Legal safeguards to prevent AI-related discrimination and harm.
  2. Transparency and explainability of high-risk AI systems.
  3. Impact assessments on human rights and democratic freedoms.
  4. Public oversight and access to remedies in cases of AI misuse.

Crucially, the treaty covers both public and private actors — meaning that even companies developing AI applications can be held accountable under international norms if they operate in participating states. Enforcement, however, remains with national authorities and legal systems.

The EU’s endorsement of the Convention complements its own AI Act, passed in 2024. While the AI Act provides a regulatory framework within the Union, the Convention establishes an interoperable legal floor for AI governance across Europe and beyond.

Civil society and advocacy organizations have cautiously welcomed the agreement. Amnesty International praised its human rights framing, though criticized a lack of mandatory bans on biometric surveillance. Meanwhile, business groups have expressed concerns over regulatory fragmentation.

The European Commission has committed to supporting the Convention’s implementation by funding digital rights training, promoting algorithmic transparency projects, and ensuring that SMEs receive technical guidance.

Experts say the treaty could become a model for similar pacts elsewhere — especially in Africa and Latin America, where regional AI regulation is still in its early stages.

United Nations and the UN Global Digital Compact

The United Nations’ evolving digital agenda reached a key milestone with the formation of the Office of Digital and Emerging Technologies, a dedicated unit within the UN Secretariat tasked with guiding and coordinating digital policy implementation. This institutional step reflects the growing urgency of implementing the UN’s Global Digital Compact, a framework to shape the future of the digital world in line with UN values.

The Office was announced in mid-2025 as part of a broader realignment of the UN’s technology functions. It operates under the leadership of the Secretary-General’s Envoy on Technology, working alongside UNDP, ITU, and UNESCO to harmonize ethical AI development, data governance, and digital inclusion efforts.

“We are entering a phase where digital policy must be proactive, inclusive, and rights-based,” said Amandeep Singh Gill, the UN Tech Envoy. “The new office allows us to better coordinate system-wide digital efforts and act as a bridge between governments, civil society, and the private sector.”

The Global Digital Compact, first proposed by the UN Secretary-General in his ‘Our Common Agenda’ report and adopted at the Summit of the Future in September 2024, seeks to address global gaps in connectivity, safeguard human rights online, and ensure responsible use of AI. The compact includes principles for universal digital inclusion, protecting data rights, and securing digital public goods.

The new Office will help monitor and support the compact’s execution, especially in areas of cross-cutting importance like emerging technologies (AI, quantum computing, IoT), cybersecurity norms, and open science.

In its first six months, the office has prioritized three workstreams:

1. Supporting Member States in developing national digital compacts
2. Convening stakeholders for multilateral discussions on AI standards
3. Advancing a shared UN-wide strategy for trustworthy digital infrastructure

Civil society groups have welcomed the development, though some caution that implementation must go beyond declarations. “This is a good step, but we need meaningful accountability mechanisms built into the Compact and its rollout,” said Anri Khachatryan of Access Now.

The Office of Digital and Emerging Technologies is expected to play a central role in shaping multilateral negotiations on AI and other technologies in 2026 and beyond.