OECD Framework for the Classification of AI Systems

In early 2025, the Organisation for Economic Co-operation and Development (OECD) unveiled its Framework for the Classification of AI Systems, a landmark effort to develop a common, globally applicable taxonomy for artificial intelligence. The goal is to provide a structured approach to understanding the functions, risks, and capabilities of AI systems, helping policymakers, developers, and regulators navigate the increasingly complex AI landscape.

The framework builds upon the OECD’s existing AI Principles, adopted in 2019, and aligns with the organization’s broader work on trustworthy AI, digital policy, and algorithmic accountability. It also responds to growing calls from the G7, EU, and UN bodies for interoperable frameworks that can facilitate regulatory coordination across jurisdictions.

Key dimensions of the classification framework include:

  • Context of use: Sector and application (e.g., healthcare, finance, justice)
  • Impact: Potential for harm or benefit to individuals and society
  • Autonomy: Level of decision-making independence from human oversight
  • Data sensitivity: Nature and volume of data used (e.g., biometric, behavioral)
  • Adaptability: Static vs. learning (self-updating) systems
  • Transparency: Level of explainability and auditability

Each AI system can be classified using this multidimensional model, allowing stakeholders to assess risk, define responsibilities, and guide the appropriate legal or ethical response. For example, an adaptive, opaque AI system used in criminal sentencing would fall into a higher-risk category than a static, explainable model used for inventory tracking.
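
To make the multidimensional model concrete, the sketch below encodes the six dimensions as a simple system profile and buckets a toy aggregate score into risk bands. The ordinal scales, field names, and thresholds are illustrative assumptions: the OECD framework describes its dimensions qualitatively and does not prescribe a scoring algorithm.

```python
# Hypothetical sketch only: the OECD framework defines dimensions, not a
# scoring method. Scales (0 = lowest concern, 2 = highest) and thresholds
# below are invented for illustration.
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    context: str           # sector/application, e.g. "criminal justice"
    impact: int            # potential for harm or benefit (0-2)
    autonomy: int          # independence from human oversight (0-2)
    data_sensitivity: int  # e.g. biometric or behavioral data (0-2)
    adaptability: int      # 0 = static, 2 = continuously learning
    opacity: int           # 0 = fully explainable, 2 = black box

def risk_band(p: AISystemProfile) -> str:
    """Toy aggregation: sum the ordinal scores and bucket them."""
    score = p.impact + p.autonomy + p.data_sensitivity + p.adaptability + p.opacity
    if score >= 7:
        return "higher-risk"
    if score >= 4:
        return "moderate-risk"
    return "lower-risk"

# The example from the text: adaptive, opaque sentencing AI vs. a
# static, explainable inventory model.
sentencing = AISystemProfile("criminal justice", 2, 2, 2, 2, 2)
inventory = AISystemProfile("inventory tracking", 0, 1, 0, 0, 0)
print(risk_band(sentencing))  # higher-risk
print(risk_band(inventory))   # lower-risk
```

A real assessment under the framework would weigh dimensions contextually rather than summing them; the point here is only the shape of the exercise.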

OECD officials emphasize that the framework is not a regulation but a tool to promote international coherence. “It’s about providing common language and comparability across sectors and countries,” explained Karine Perset, head of the OECD AI Policy Observatory. “We want to avoid fragmentation and duplication.”

The framework has already gained support from several key players, including Canada, Japan, and the European Commission. It has also been cited in discussions at the G7 Hiroshima AI Process, where countries agreed to explore shared risk classification methods.

Critics argue the voluntary nature of the framework may limit its impact, but others see it as a necessary prelude to enforceable norms. “Classification is the foundation for any meaningful governance,” said Michael Veale, a digital rights scholar. “You can’t regulate what you don’t understand.”

The OECD plans to update the framework regularly and integrate it into future workstreams on AI testing, certification, and public procurement. A technical guidance document and pilot platform for developers are expected by the end of 2025.

🔗 Sources:

  • [OECD AI Policy Observatory](https://oecd.ai)
  • [OECD Framework Announcement](https://www.oecd.org/digital/ai)

Council of Europe’s AI Convention

In May 2024, the Council of Europe adopted the world’s first binding international treaty on artificial intelligence, marking a turning point in global AI governance. Known formally as the ‘Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law,’ the agreement was negotiated among the 46 Council of Europe member states and observers, including the U.S., Canada, Israel, and Japan.

The Convention sets legally binding obligations on governments to ensure that AI systems are designed, developed, and used in ways that uphold human rights, democratic values, and the rule of law. It is the first global treaty to explicitly center AI regulation around the Universal Declaration of Human Rights and the European Convention on Human Rights.

Key features of the Convention include:

  • Human oversight and accountability requirements
  • Transparent and explainable AI system standards
  • Risk-based approaches to public sector AI deployment
  • Explicit prohibitions against AI systems that pose ‘unacceptable risk’ to human dignity
  • Provisions for international cooperation and mutual legal assistance


The Convention does not mandate technical standards but obligates states to establish legal and institutional frameworks ensuring rights-compliant AI. It applies to both public and private sector actors when they carry out public functions or affect fundamental rights.

European officials emphasize that the treaty is complementary to the EU’s Artificial Intelligence Act, which focuses on market regulation. “The Convention provides the ethical and democratic foundations,” said Marija Pejčinović Burić, Secretary General of the Council of Europe. “The AI Act builds on those foundations to govern the market.”

Notably, the Convention remains open to ratification by non-European countries. The United States participated in negotiations and has expressed interest in aligning domestic principles, though it is unlikely to ratify the treaty in the near term. African and Latin American states have also been invited to consider accession.

The treaty includes mechanisms for peer review and a Conference of the Parties to monitor implementation. Civil society organizations, including Access Now and AlgorithmWatch, praised the treaty’s inclusion of transparency and accountability principles, but called for stronger enforcement tools and more explicit red lines on biometric surveillance and predictive policing.

Ratification processes are now underway across Europe. The treaty will enter into force once five states ratify it, including at least three Council of Europe members. As of July 2025, France, Germany, and the Netherlands have ratified, with others expected to follow by the end of the year.

🔗 Sources:

  • [Council of Europe AI Treaty](https://www.coe.int/en/web/artificial-intelligence/convention)
  • [Press release, May 2024](https://www.coe.int/en/web/portal/-/council-of-europe-adopts-world-s-first-binding-international-treaty-on-artificial-intelligence)

DOJ’s Antitrust Focus on Big Tech & AI

The U.S. Department of Justice (DOJ) is intensifying its antitrust scrutiny of Big Tech’s involvement in artificial intelligence, signaling a new phase in the federal government’s effort to regulate digital power. With AI now considered a critical infrastructure technology, regulators are expressing concern over market concentration, vertical integration, and potential abuses of dominance by a handful of firms.

In early 2025, the DOJ’s Antitrust Division confirmed it had launched investigations into the partnerships and investment structures linking leading AI model developers with cloud providers and platform gatekeepers. These include scrutiny of Microsoft’s ties with OpenAI, Google’s investment in Anthropic, and Amazon’s investments in AI startups integrated into AWS services.

“When a few firms control the compute, the capital, and the customer access, we have to ask whether competition is being distorted at every layer of the stack,” said Assistant Attorney General Jonathan Kanter. “We will not allow the AI revolution to replicate the monopolies of the platform era.”

The DOJ is reportedly examining:

  • Exclusive data-sharing agreements
  • Cloud credits tied to platform loyalty
  • Preferential treatment in app stores and APIs
  • Bundled AI services in productivity software

Kanter’s division has already signaled it may challenge existing deals that involve preferential infrastructure access in exchange for model equity, such as Microsoft’s arrangement with OpenAI. Meanwhile, the Federal Trade Commission (FTC) is also pursuing parallel investigations, focusing on deceptive AI advertising and consumer harm.

The renewed antitrust agenda has received bipartisan support in Congress, with both Republican and Democratic lawmakers calling for greater oversight of how tech giants shape AI markets. A bipartisan AI Competition Act introduced in the Senate proposes stricter reporting requirements and sunset clauses for vertically integrated AI services.

Critics argue that regulators risk stifling innovation or delaying beneficial AI deployment. “Heavy-handed interventions could backfire,” said Henry Olson, policy director at the American Enterprise Institute. “But transparency and fair access are legitimate concerns.”

Advocacy groups like the Center for Humane Technology and Open Markets Institute have welcomed the DOJ’s stance. “AI concentration is not just an economic risk — it’s a democratic one,” argued Sarah Miller of the American Economic Liberties Project.

These antitrust actions mark a significant expansion of the Biden administration’s approach to tech regulation. With the 2026 elections looming, the outcomes of these cases could shape both the structure of the AI industry and the political debate over tech accountability.

EU AI Gigafactories

The European Union is investing in a new generation of ‘AI Gigafactories’—massive facilities designed to boost regional capacity in training, testing, and deploying large-scale artificial intelligence models. These sites, modeled on semiconductor and battery gigafactories, aim to provide shared infrastructure for European companies, universities, and public institutions seeking alternatives to reliance on American and Chinese AI systems.

The initiative is part of the EU’s Digital Decade strategy and backed by the European Investment Bank, with initial funding of €2.5 billion approved in 2025. Construction has already begun on three flagship sites in Germany, France, and the Netherlands, with a fourth proposed in Central Europe to ensure balanced access across the bloc.

The gigafactories are envisioned as national and cross-border hubs for:

  • High-performance computing (HPC) clusters tailored for AI workloads
  • Secure and sovereign cloud infrastructure for model training
  • Datasets aligned with EU data protection and multilingual priorities
  • Sandboxes for regulatory testing under the EU AI Act

“These facilities will be the backbone of a new sovereign AI ecosystem,” said Margrethe Vestager, Executive Vice President of the European Commission. “They will give European innovators access to compute and data resources they currently lack.”

The move comes amid growing concerns that European firms—especially startups and public research institutions—cannot compete with U.S. tech giants that dominate compute access. According to a 2024 report by the European AI Observatory, 84% of large-model development in Europe relied on infrastructure based outside the continent.

Civil society groups have called on the EU to ensure the gigafactories uphold sustainability, transparency, and fair access. “If we’re building AI at scale, it must be green and equitable,” said Clara Boucher of the Green Tech Alliance. The Commission has pledged that all gigafactories will meet EU climate goals and be powered by renewable energy.

The initiative is also intended to anchor Europe’s competitiveness in foundation model development. Several consortia—comprising startups, universities, and state-backed labs—are expected to bid for AI project slots within the facilities starting in early 2026.

“This is a bold step, but a necessary one,” said Jan Kowalski of the European AI Association. “If Europe wants to set global AI norms, we must have our own AI engines.”

EU AI Act Timeline Enforced, Pushback from Businesses

The European Commission has confirmed it will enforce the timeline of the EU Artificial Intelligence Act, despite mounting calls from industry leaders to delay implementation. The landmark regulation, adopted in 2024, establishes a comprehensive legal framework for AI systems operating within the European Union, with obligations phasing in gradually from August 2025.

Tech executives from across the continent — including leaders from Airbus, Siemens, and SAP — have signed a joint letter warning that the law’s complexity and strict risk-based classification could hinder innovation and create excessive compliance burdens. “Europe risks falling behind in global AI development,” the letter stated, calling for a 12-month pause on enforcement to give companies more time to prepare.

The Commission rejected the proposal, citing extensive consultation and transitional measures already built into the regulation. “We believe the timeline is appropriate and proportionate,” said Internal Market Commissioner Thierry Breton. “We are providing clarity, legal certainty, and a level playing field for responsible AI.”

The AI Act categorizes systems into four risk levels — unacceptable, high-risk, limited, and minimal — and imposes obligations based on those categories. High-risk systems, such as AI used in employment, healthcare, and law enforcement, must comply with strict data governance, transparency, and human oversight requirements.
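
As a rough mental model, the Act’s tiered logic amounts to a lookup from risk category to obligations. The sketch below condenses the obligations named above into plain-language lists; it is an orientation aid under those simplifying assumptions, not a restatement of the legal text.

```python
# Illustrative only: the tier names follow the Act, but the obligation
# lists are condensed plain-language summaries, not legal requirements.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high-risk"
    LIMITED = "limited"
    MINIMAL = "minimal"

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "data governance and quality controls",
        "transparency and technical documentation",
        "human oversight measures",
    ],
    RiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
    RiskTier.MINIMAL: ["no mandatory obligations; voluntary codes apply"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the summarized duties attached to a risk tier."""
    return OBLIGATIONS[tier]

# e.g. an AI hiring tool would sit in the high-risk tier:
for duty in obligations_for(RiskTier.HIGH):
    print("-", duty)
```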

Developers of general-purpose AI (GPAI) models — like large language models — are also subject to new transparency and safety provisions under a separate code of practice, which is still under negotiation and expected to be released by the end of 2025. The phased approach will see full compliance for GPAI developers and high-risk AI providers by August 2026.

Startups and small businesses have raised concerns about access to compliance tools and regulatory guidance. The European Commission has pledged to roll out a technical sandbox and funding support to help SMEs adapt.

Meanwhile, civil society groups have welcomed the Commission’s resolve. “The tech industry always asks for more time,” said Sarah Chander of European Digital Rights (EDRi). “But people affected by biased algorithms and opaque systems can’t afford to wait another year for safeguards.”

The European AI Office, set to be fully operational by early 2026, will oversee enforcement, coordinate with national authorities, and support consistent application of the Act across member states.

Analysts say the outcome will shape the global trajectory of AI regulation. With the U.S., UK, and China watching closely, the EU’s decision to stay the course could either position it as a global leader in trustworthy AI — or weigh down its tech sector in regulatory complexity.

G7 Hiroshima AI Process

The G7 Hiroshima AI Process, initiated during the 2023 summit in Japan, has emerged as one of the most influential global platforms for coordinating democratic approaches to artificial intelligence governance. Backed by the leaders of Canada, France, Germany, Italy, Japan, the United Kingdom, the United States, and the European Union, the initiative aims to promote shared values and responsible AI practices among advanced economies.

At the heart of the Hiroshima Process is a voluntary **Code of Conduct for Advanced AI Systems**, published in late 2023. The code outlines 11 principles that signatory governments and tech companies are encouraged to implement, including fairness, transparency, accountability, data privacy, and protections against manipulation or misuse.

“This process is about proving that democracy can still lead in the age of AI,” read a G7 communiqué issued during the summit. “Our shared values are our strength — and they must be reflected in how these technologies are governed.”

The Code of Conduct includes specific guidance on the practices below; a sketch of how an organization might track them follows the list:

  • Testing and evaluation of high-risk AI models
  • Disclosure of training data sources
  • Red-teaming and safety checks before public deployment
  • Reporting obligations for developers and deployers
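
A hypothetical illustration of how a developer might operationalize these four items internally: the record type and release gate below are invented for this sketch, since the code of conduct defines no schema or tooling.

```python
# Hypothetical compliance-tracking sketch; field names and the release
# gate are assumptions made for illustration, not part of the G7 code.
from dataclasses import dataclass, field

@dataclass
class PreDeploymentRecord:
    model_name: str
    evaluations_run: list[str] = field(default_factory=list)    # testing/evaluation
    training_data_disclosed: bool = False                       # data-source disclosure
    red_team_findings: list[str] = field(default_factory=list)  # red-teaming results
    incidents_reported: int = 0                                 # reporting obligations

    def ready_for_release(self) -> bool:
        """Toy gate: require at least one evaluation, a logged
        red-team pass, and a published data disclosure."""
        return (bool(self.evaluations_run)
                and bool(self.red_team_findings)
                and self.training_data_disclosed)

record = PreDeploymentRecord(
    model_name="example-model",
    evaluations_run=["capability eval", "misuse eval"],
    red_team_findings=["jailbreak attempts logged"],
    training_data_disclosed=True,
)
print(record.ready_for_release())  # True
```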

While the framework is non-binding, it has already influenced national policies. For instance, the UK’s Frontier AI Taskforce and the U.S. Executive Order on AI both echo Hiroshima principles. Similarly, the European Commission is integrating the code’s recommendations into guidance for the EU AI Act’s rollout.

Industry response has been mixed. Microsoft, IBM, and Google have voiced support and committed to elements of the code. However, civil liberties groups like the Electronic Frontier Foundation have warned that the voluntary approach may not offer sufficient protections in areas like facial recognition or algorithmic discrimination.

Beyond regulation, the Hiroshima AI Process includes a knowledge-sharing platform where G7 states exchange best practices on AI oversight, auditing, and incident response. It also supports international research collaboration, with Japan proposing the establishment of an AI Safety Research Hub in Tokyo in partnership with the OECD.

Looking forward, the G7 countries plan to review the code’s implementation in 2026 and expand engagement with Global South partners. The goal is to elevate the Hiroshima Process as a benchmark for inclusive and democratic AI governance at a time when geopolitical competition around tech standards is intensifying.

European Union, AI Convention – Goals, Parameters, Ambitions

The European Union has formally joined the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law — the world’s first legally binding international treaty on AI. Signed in Vilnius in September 2024 and ratified by the EU in early 2025, the convention represents a landmark moment for global digital governance.

The agreement is the culmination of nearly three years of negotiations among the 46 Council of Europe member states, the EU, and observers including the United States, Canada, and Japan. Its stated goal is to ensure that the development and use of artificial intelligence systems uphold democratic values, human rights, and the rule of law.

“The AI Convention places fundamental rights at the core of how we govern algorithmic systems across borders,” said Christel Schaldemose, a Danish MEP active in the European Parliament’s digital policy work. “It’s a foundational baseline that builds trust — not just in the EU, but globally.”

The Convention outlines four core obligations for signatories:

  1. Legal safeguards to prevent AI-related discrimination and harm.
  2. Transparency and explainability of high-risk AI systems.
  3. Impact assessments on human rights and democratic freedoms.
  4. Public oversight and access to remedies in cases of AI misuse.

Crucially, the treaty covers both public and private actors — meaning that even companies developing AI applications can be held accountable under international norms if they operate in participating states. Enforcement, however, remains with national authorities and legal systems.

The EU’s endorsement of the Convention complements its own AI Act, passed in 2024. While the AI Act provides a regulatory framework within the Union, the Convention establishes an interoperable legal floor for AI governance across Europe and beyond.

Civil society and advocacy organizations have cautiously welcomed the agreement. Amnesty International praised its human rights framing, though criticized a lack of mandatory bans on biometric surveillance. Meanwhile, business groups have expressed concerns over regulatory fragmentation.

The European Commission has committed to supporting the Convention’s implementation by funding digital rights training, promoting algorithmic transparency projects, and ensuring that SMEs receive technical guidance.

Experts say the treaty could become a model for similar pacts elsewhere — especially in Africa and Latin America, where regional AI regulation is still in its early stages.
