In early 2025, the Organisation for Economic Co-operation and Development (OECD) unveiled its Framework for the Classification of AI Systems, a landmark effort to develop a common, globally applicable taxonomy for artificial intelligence. The goal is to provide a structured approach to understanding the functions, risks, and capabilities of AI systems, helping policymakers, developers, and regulators navigate the increasingly complex AI landscape.
The framework builds upon the OECD’s existing AI Principles, adopted in 2019, and aligns with the organization’s broader work on trustworthy AI, digital policy, and algorithmic accountability. It also responds to growing calls from the G7, EU, and UN bodies for interoperable frameworks that can facilitate regulatory coordination across jurisdictions.
Key dimensions of the classification framework include:
- Context of use: Sector and application (e.g., healthcare, finance, justice)
- Impact: Potential for harm or benefit to individuals and society
- Autonomy: Level of decision-making independence from human oversight
- Data sensitivity: Nature and volume of data used (e.g., biometric, behavioral)
- Adaptability: Static vs. learning (self-updating) systems
- Transparency: Level of explainability and auditability
Each AI system can be classified using this multidimensional model, allowing stakeholders to assess risk, define responsibilities, and guide the appropriate legal or ethical response. For example, an adaptive, opaque AI system used in criminal sentencing would fall into a higher-risk category than a static, explainable model used for inventory tracking.
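The multidimensional assessment described above could be sketched as a simple scoring model. This is illustrative only: the six dimensions mirror the article's list, but the 0–2 scale, the weights (equal here), and the tier thresholds are assumptions for the sketch, not part of the OECD framework itself.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Illustrative 0-2 scale per dimension (0 = lowest concern, 2 = highest).

    The dimensions follow the article; the scale and thresholds are assumptions.
    """
    context_sensitivity: int   # sector stakes: justice/healthcare high, logistics low
    impact: int                # potential harm or benefit to individuals and society
    autonomy: int              # independence from human oversight
    data_sensitivity: int      # biometric/behavioral data scores high
    adaptability: int          # static = 0, self-updating (learning) = 2
    opacity: int               # explainable/auditable = 0, black-box = 2

    def risk_tier(self) -> str:
        # Sum the dimensions and map to a coarse tier (thresholds assumed).
        score = (self.context_sensitivity + self.impact + self.autonomy
                 + self.data_sensitivity + self.adaptability + self.opacity)
        if score >= 9:
            return "higher-risk"
        if score >= 5:
            return "moderate-risk"
        return "lower-risk"

# The article's two examples: an adaptive, opaque sentencing system
# versus a static, explainable inventory-tracking model.
sentencing_ai = AISystemProfile(2, 2, 1, 2, 2, 2)
inventory_ai = AISystemProfile(0, 0, 1, 0, 0, 0)
print(sentencing_ai.risk_tier())  # higher-risk
print(inventory_ai.risk_tier())   # lower-risk
```

The point of the multidimensional approach is visible even in this toy version: no single dimension decides the outcome, so two systems with the same autonomy level can still land in very different tiers.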
OECD officials emphasize that the framework is not a regulation but a tool to promote international coherence. “It’s about providing common language and comparability across sectors and countries,” explained Karine Perset, head of the OECD AI Policy Observatory. “We want to avoid fragmentation and duplication.”
The framework has already gained support from several key players, including Canada, Japan, and the European Commission. It has also been cited in discussions at the G7 Hiroshima AI Process, where countries agreed to explore shared risk classification methods.
Critics argue the voluntary nature of the framework may limit its impact, but others see it as a necessary prelude to enforceable norms. “Classification is the foundation for any meaningful governance,” said Michael Veale, a digital rights scholar. “You can’t regulate what you don’t understand.”
The OECD plans to update the framework regularly and integrate it into future workstreams on AI testing, certification, and public procurement. A technical guidance document and pilot platform for developers are expected by the end of 2025.
🔗 Sources:
- [OECD AI Policy Observatory](https://oecd.ai)
- [OECD Framework Announcement](https://www.oecd.org/digital/ai)