The EU AI Act classifies AI systems into four risk categories: prohibited, high-risk, limited risk, and minimal risk. Correct classification determines compliance obligations, conformity assessment procedures, and regulatory requirements. This guide explains how to classify AI systems under Articles 5 and 6 and Annex III ahead of August 2, 2026, when most obligations for high-risk systems begin to apply.
Table of Contents
- What is EU AI Act classification?
- What are the four risk categories in the EU AI Act?
- What AI systems are prohibited under the EU AI Act?
- What AI systems are classified as high-risk?
- What are the Annex III high-risk AI system categories?
- What are limited risk AI systems?
- What are minimal risk AI systems?
- How to classify your AI system step-by-step?
- What documentation is required for classification?
What is EU AI Act classification?
EU AI Act classification is the process of categorizing AI systems according to their risk level under Regulation (EU) 2024/1689. Classification determines which compliance obligations apply, what conformity assessment procedures must be followed, and what documentation is required.
The EU AI Act uses a risk-based approach, meaning:
- Higher Risk = Stricter Requirements: Prohibited and high-risk AI systems face the most stringent obligations
- Lower Risk = Lighter Requirements: Limited and minimal risk AI systems have fewer compliance obligations
- Classification is Mandatory: Providers must correctly classify their AI systems before placing them on the market
- Classification Affects Everything: Risk level determines technical documentation, conformity assessment, post-market monitoring, and registration requirements
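The risk-based approach above can be sketched as a simple lookup. The mapping below is illustrative and non-exhaustive (the obligation labels are paraphrases, not the Regulation's exhaustive lists), but it captures the core idea that the assigned category drives the compliance burden:

```python
# Illustrative (non-exhaustive) mapping from risk category to the kinds of
# obligations it triggers; category names follow the guide above.
OBLIGATIONS = {
    "prohibited": ["may not be placed on the market or put into service"],
    "high-risk": [
        "risk management system",
        "technical documentation (Article 11)",
        "conformity assessment",
        "post-market monitoring",
        "registration in the EU database",
    ],
    "limited risk": ["transparency obligations (Article 50)"],
    "minimal risk": [],  # no specific obligations; voluntary codes of conduct
}

def obligation_count(category: str) -> int:
    """Rough proxy for compliance burden: number of obligation buckets."""
    return len(OBLIGATIONS[category])
```

For example, `obligation_count("high-risk") > obligation_count("limited risk")` holds by construction, mirroring the "higher risk = stricter requirements" principle.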
Source: European Commission - AI Act Official Page
What are the four risk categories in the EU AI Act?
The EU AI Act establishes four risk categories for AI systems, each with different compliance requirements and obligations.
What AI systems are prohibited under the EU AI Act?
Article 5 of the EU AI Act prohibits AI systems that pose unacceptable risks to safety, fundamental rights, and democratic values. These systems cannot be placed on the market, put into service, or used within the EU.
What AI systems are classified as high-risk?
High-risk AI systems are subject to the most stringent compliance requirements under the EU AI Act. An AI system is classified as high-risk if it meets one of two criteria:
Criterion 1: Listed in Annex III
AI systems intended to be used in the specific use cases listed in Annex III are classified as high-risk by default. The only exception is the Article 6(3) derogation, which applies where the system does not pose a significant risk of harm to health, safety, or fundamental rights; providers relying on it must document that assessment.
Criterion 2: Used as Safety Component
AI systems used as safety components of products covered by the EU harmonization legislation listed in Annex I are also classified as high-risk.
Source: EU AI Act - Article 6 and Annex III
What are the Annex III high-risk AI system categories?
Annex III lists eight areas of high-risk AI systems: biometrics; critical infrastructure; education and vocational training; employment and workers' management; access to essential private and public services; law enforcement; migration, asylum and border control management; and administration of justice and democratic processes. If your AI system falls into any of these areas, it is classified as high-risk by default.
What are limited risk AI systems?
Limited risk AI systems are subject to the transparency obligations in Article 50 of the EU AI Act. For example, chatbots must disclose that users are interacting with an AI system, and AI-generated or manipulated content (such as deepfakes) must be labelled as such.
What are minimal risk AI systems?
Minimal risk AI systems are all AI systems that do not fall into the prohibited, high-risk, or limited risk categories. These systems have no specific compliance obligations under the EU AI Act, but providers should still follow general principles of trustworthy AI.
Examples of minimal risk AI systems include:
- AI-powered spam filters
- Recommendation systems for content (unless used in high-risk contexts)
- Video games with AI features
- AI-powered translation tools (unless used in high-risk contexts)
- AI chatbots for customer service (unless used in high-risk contexts)
How to classify your AI system step-by-step?
Follow this systematic process to correctly classify your AI system according to the EU AI Act.
Step 1: Check for Prohibited Practices
- Review Article 5 prohibited practices
- Determine if your AI system falls into any prohibited category
- If yes, the system is prohibited and cannot be placed on the market
- If no, proceed to Step 2
Step 2: Check Annex III High-Risk Categories
- Review Annex III categories systematically
- Determine if your AI system is used in any Annex III use case
- If yes, the system is high-risk
- If no, proceed to Step 3
Step 3: Check Safety Component Classification
- Determine if your AI system is used as a safety component
- Check if the product is covered by Annex I harmonization legislation
- If yes, the system is high-risk
- If no, proceed to Step 4
Step 4: Check Limited Risk Categories
- Review the Article 50 transparency categories
- Determine if your AI system requires transparency obligations
- If yes, the system is limited risk
- If no, the system is minimal risk
Step 5: Document Classification Decision
- Document the classification rationale
- Record which criteria were applied
- Maintain classification documentation for compliance purposes
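Steps 1 through 4 form a simple decision tree, sketched below. Each boolean flag is the answer to the corresponding step's question; answering those questions correctly still requires legal analysis of Article 5, Annex III, Annex I, and Article 50, so this is a process aid, not a compliance determination:

```python
def classify(is_prohibited_practice: bool,
             in_annex_iii_use_case: bool,
             is_safety_component_annex_i: bool,
             has_transparency_obligation: bool) -> str:
    """Apply Steps 1-4 in order and return the resulting risk category."""
    if is_prohibited_practice:                   # Step 1: Article 5
        return "prohibited"
    if in_annex_iii_use_case:                    # Step 2: Annex III
        return "high-risk"
    if is_safety_component_annex_i:              # Step 3: safety component
        return "high-risk"
    if has_transparency_obligation:              # Step 4: Article 50
        return "limited risk"
    return "minimal risk"                        # none of the above
```

For instance, a CV-screening system that matches an Annex III use case would be classified as `classify(False, True, False, False)`, i.e. high-risk, regardless of the later checks.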
What documentation is required for classification?
Providers must document their AI system classification decisions. This documentation should include:
- Classification Result: The risk category assigned to the AI system
- Classification Rationale: Explanation of why the system was classified in this category
- Criteria Applied: Which articles, annexes, or criteria were used for classification
- Use Case Description: Detailed description of how the AI system is intended to be used
- Risk Assessment: Assessment of risks posed by the AI system
For high-risk AI systems, classification documentation must be included in technical documentation (Article 11) and used for conformity assessment procedures.
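One way to keep the records above consistent across systems is a small structured schema. The field names below mirror the bullet list in this section but are illustrative; the Regulation prescribes what must be documented, not this particular format, and the example system is hypothetical:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ClassificationRecord:
    """Illustrative classification record; field names are not mandated."""
    system_name: str
    risk_category: str        # classification result
    rationale: str            # classification rationale
    criteria_applied: list    # articles / annexes relied on
    use_case: str             # intended-use description
    risk_assessment: str      # pointer to the risk assessment

record = ClassificationRecord(
    system_name="CV screening assistant",      # hypothetical example
    risk_category="high-risk",
    rationale="Used to filter and rank job applications (employment area)",
    criteria_applied=["Article 6(2)", "Annex III"],
    use_case="Ranks incoming applications for review by human recruiters",
    risk_assessment="See risk management file RM-001",
)

# Serialize for inclusion in technical documentation or an audit trail.
print(json.dumps(asdict(record), indent=2))
```

Storing records in a machine-readable form like this makes it easier to feed the same rationale into the Article 11 technical documentation for high-risk systems.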
Next Steps
Correct classification is the foundation of EU AI Act compliance. Organizations should classify their AI systems now to determine compliance obligations and prepare for the August 2, 2026 deadline.
Need Help with AI System Classification?
ActProof.ai automates EU AI Act compliance through AI-BOM generation, Policy-as-Code validation, bias monitoring, and automated documentation. Our platform helps organizations correctly classify AI systems and meet the 2026 deadline. Contact us to learn how we can help.
Start Free Trial