
EU AI Act Classification: How to Classify AI Systems by Risk Level 2026


The EU AI Act classifies AI systems into four risk categories: prohibited, high-risk, limited risk, and minimal risk. Correct classification determines compliance obligations, conformity assessment procedures, and regulatory requirements. This guide explains how to classify AI systems according to Article 6 and Annex III before the August 2, 2026 deadline.


What is EU AI Act classification?

EU AI Act classification is the process of categorizing AI systems according to their risk level under Regulation (EU) 2024/1689. Classification determines which compliance obligations apply, what conformity assessment procedures must be followed, and what documentation is required.

The EU AI Act uses a risk-based approach, meaning:

  • Higher Risk = Stricter Requirements: Prohibited and high-risk AI systems face the most stringent obligations
  • Lower Risk = Lighter Requirements: Limited and minimal risk AI systems have fewer compliance obligations
  • Classification is Mandatory: Providers must correctly classify their AI systems before placing them on the market
  • Classification Affects Everything: Risk level determines technical documentation, conformity assessment, post-market monitoring, and registration requirements

Source: European Commission - AI Act Official Page

What are the four risk categories in the EU AI Act?

The EU AI Act establishes four risk categories for AI systems, each with different compliance requirements and obligations.

| Risk Category | Description | Compliance Requirements | Article Reference |
| --- | --- | --- | --- |
| Prohibited AI | AI systems that pose unacceptable risks and are banned | Cannot be placed on the market or put into service | Article 5 |
| High-Risk AI | AI systems listed in Annex III or used as safety components of regulated products | Full compliance: QMS, risk management, technical documentation, conformity assessment | Article 6, Annex III |
| Limited Risk AI | AI systems subject to transparency obligations | Transparency requirements | Article 50 |
| Minimal Risk AI | All other AI systems not falling into the above categories | No specific compliance obligations | General provisions |
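As an illustration, the four categories can be modelled as a simple enumeration. The names and the obligation summaries below are our own shorthand for the table above, not terms defined by the Regulation:

```python
from enum import Enum

class RiskCategory(Enum):
    """The four EU AI Act risk levels, ordered from strictest to lightest."""
    PROHIBITED = "prohibited"      # Article 5: may not be placed on the market
    HIGH_RISK = "high_risk"        # Article 6 / Annex III: full compliance regime
    LIMITED_RISK = "limited_risk"  # Article 50: transparency obligations
    MINIMAL_RISK = "minimal_risk"  # No specific obligations

# Illustrative obligation summary keyed by category (wording is our paraphrase):
OBLIGATIONS = {
    RiskCategory.PROHIBITED: "Cannot be placed on the market or put into service",
    RiskCategory.HIGH_RISK: "QMS, risk management, technical documentation, conformity assessment",
    RiskCategory.LIMITED_RISK: "Transparency requirements (Article 50)",
    RiskCategory.MINIMAL_RISK: "No specific compliance obligations",
}
```

Keeping the category as an enum rather than a free-form string makes it easy to attach obligations, deadlines, or documentation templates to each level in a compliance tool.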

What AI systems are prohibited under the EU AI Act?

Article 5 of the EU AI Act prohibits AI systems that pose unacceptable risks to safety, fundamental rights, and democratic values. These systems cannot be placed on the market, put into service, or used within the EU.

Article 5(1) of the final Regulation lists eight prohibited practices in total; the five most commonly encountered are summarized below.

| Prohibited Practice | Description | Article 5 Reference |
| --- | --- | --- |
| Subliminal Manipulation | AI systems that manipulate persons through subliminal techniques beyond their consciousness | Article 5(1)(a) |
| Exploitation of Vulnerabilities | AI systems that exploit vulnerabilities of specific groups due to age, disability, or social or economic situation | Article 5(1)(b) |
| Social Scoring | AI systems for social scoring that lead to detrimental treatment of natural persons | Article 5(1)(c) |
| Emotion Recognition | AI systems that infer emotions of natural persons in workplaces and educational institutions (except for medical or safety reasons) | Article 5(1)(f) |
| Real-Time Remote Biometric Identification | Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with limited exceptions) | Article 5(1)(h) |

What AI systems are classified as high-risk?

High-risk AI systems are subject to the most stringent compliance requirements under the EU AI Act. An AI system is classified as high-risk if it meets one of two criteria:

Criterion 1: Listed in Annex III

AI systems whose intended purpose falls within a use case listed in Annex III are classified as high-risk, unless the narrow derogation in Article 6(3) applies because the system does not pose a significant risk of harm to health, safety, or fundamental rights.

Criterion 2: Used as Safety Component

AI systems that are safety components of products covered by the Union harmonisation legislation listed in Annex I (or that are themselves such products), and that must undergo third-party conformity assessment under that legislation, are also classified as high-risk.

Source: EU AI Act - Article 6 and Annex III

What are the Annex III high-risk AI system categories?

Annex III lists eight categories of high-risk AI systems. If your AI system falls into any of these categories, it is automatically classified as high-risk.

| Category | Use Case | Annex III Reference |
| --- | --- | --- |
| Biometric Identification | AI systems intended to be used for biometric identification of natural persons | Annex III(1) |
| Critical Infrastructure | AI systems intended to be used as safety components in the management and operation of critical infrastructure | Annex III(2) |
| Education and Training | AI systems intended to be used to determine access or admission to educational institutions or to assess students | Annex III(3) |
| Employment and Workers | AI systems intended to be used for recruitment, selection, promotion, or termination of workers | Annex III(4) |
| Access to Essential Services | AI systems intended to be used to evaluate creditworthiness or establish credit scores | Annex III(5) |
| Law Enforcement | AI systems intended to be used by law enforcement authorities for risk assessment, polygraphs, or crime analytics | Annex III(6) |
| Migration and Border Control | AI systems intended to be used for migration, asylum, and border control management | Annex III(7) |
| Administration of Justice | AI systems intended to be used to assist judicial authorities in researching and interpreting facts and law | Annex III(8) |

What are limited risk AI systems?

Limited risk AI systems are subject to specific transparency obligations under Article 50 of the EU AI Act. Providers and deployers of these systems must ensure that people are informed when they are interacting with an AI system or exposed to AI-generated content.

| Limited Risk Category | Description | Article Reference |
| --- | --- | --- |
| AI Systems Interacting with Humans | AI systems designed to interact directly with natural persons | Article 50 |
| Emotion Recognition Systems | AI systems that detect, recognize, or infer emotions or intentions | Article 50 |
| Biometric Categorization | AI systems that categorize natural persons based on biometric data | Article 50 |

What are minimal risk AI systems?

Minimal risk AI systems are all AI systems that do not fall into the prohibited, high-risk, or limited risk categories. These systems have no specific compliance obligations under the EU AI Act, but providers should still follow general principles of trustworthy AI.

Examples of minimal risk AI systems include:

  • AI-powered spam filters
  • Recommendation systems for content (unless used in high-risk contexts)
  • Video games with AI features
  • AI-powered translation tools (unless used in high-risk contexts)
  • AI chatbots for customer service (unless used in high-risk contexts)

How to classify your AI system step-by-step?

Follow this systematic process to correctly classify your AI system according to the EU AI Act.

Step 1: Check for Prohibited Practices

  • Review Article 5 prohibited practices
  • Determine if your AI system falls into any prohibited category
  • If yes, the system is prohibited and cannot be placed on the market
  • If no, proceed to Step 2

Step 2: Check Annex III High-Risk Categories

  • Review Annex III categories systematically
  • Determine if your AI system is used in any Annex III use case
  • If yes, the system is high-risk (unless the narrow derogation in Article 6(3) applies)
  • If no, proceed to Step 3

Step 3: Check Safety Component Classification

  • Determine if your AI system is used as a safety component
  • Check if the product is covered by the Union harmonisation legislation listed in Annex I
  • If yes, the system is high-risk
  • If no, proceed to Step 4

Step 4: Check Limited Risk Categories

  • Review the Article 50 limited risk (transparency) categories
  • Determine if your AI system requires transparency obligations
  • If yes, the system is limited risk
  • If no, the system is minimal risk

Step 5: Document Classification Decision

  • Document the classification rationale
  • Record which criteria were applied
  • Maintain classification documentation for compliance purposes
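The decision flow in Steps 1 through 4 can be sketched as a single function. The boolean inputs are assumptions we introduce for illustration; in practice each answer comes from the legal review described in the corresponding step:

```python
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED_RISK = "limited_risk"
    MINIMAL_RISK = "minimal_risk"

def classify(
    is_prohibited_practice: bool,  # Step 1: Article 5 screening
    in_annex_iii_use_case: bool,   # Step 2: Annex III screening
    is_safety_component: bool,     # Step 3: safety component of an Annex I product
    needs_transparency: bool,      # Step 4: Article 50 screening
) -> RiskCategory:
    """Apply the classification steps in order; the first match wins."""
    if is_prohibited_practice:
        return RiskCategory.PROHIBITED
    if in_annex_iii_use_case or is_safety_component:
        return RiskCategory.HIGH_RISK
    if needs_transparency:
        return RiskCategory.LIMITED_RISK
    return RiskCategory.MINIMAL_RISK

# Example: a customer-service chatbot that must disclose it is an AI
print(classify(False, False, False, True).value)  # → limited_risk
```

Note that the ordering matters: a system in an Annex III use case is high-risk even if it also triggers transparency obligations, which is why the checks run strictest-first.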

What documentation is required for classification?

Providers must document their AI system classification decisions. This documentation should include:

  • Classification Result: The risk category assigned to the AI system
  • Classification Rationale: Explanation of why the system was classified in this category
  • Criteria Applied: Which articles, annexes, or criteria were used for classification
  • Use Case Description: Detailed description of how the AI system is intended to be used
  • Risk Assessment: Assessment of risks posed by the AI system

For high-risk AI systems, classification documentation must be included in technical documentation (Article 11) and used for conformity assessment procedures.
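The required fields map naturally onto a structured record that can be versioned alongside the technical documentation. The field names and the example values below are illustrative, not prescribed by the Regulation:

```python
from dataclasses import dataclass, asdict, field
import json

@dataclass
class ClassificationRecord:
    """Illustrative classification documentation (field names are our own)."""
    system_name: str
    risk_category: str            # Classification Result
    rationale: str                # Classification Rationale
    criteria_applied: list[str]   # Articles / annexes used
    use_case: str                 # Intended-use description
    risk_assessment: str          # Summary of identified risks

record = ClassificationRecord(
    system_name="CV screening assistant",
    risk_category="high_risk",
    rationale="Used for recruitment and selection of workers",
    criteria_applied=["Article 6(2)", "Annex III(4)"],
    use_case="Ranks job applications for review by human recruiters",
    risk_assessment="Risk of discriminatory outcomes; bias monitoring required",
)

# Serialize for inclusion in the technical documentation package
print(json.dumps(asdict(record), indent=2))
```

Serializing the record to JSON makes it easy to attach the classification decision to the Article 11 technical documentation and to re-check it whenever the system's intended purpose changes.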

Next Steps

Correct classification is the foundation of EU AI Act compliance. Organizations should classify their AI systems now to determine compliance obligations and prepare for the August 2, 2026 deadline.

Need Help with AI System Classification?

ActProof.ai automates EU AI Act compliance through AI-BOM generation, Policy-as-Code validation, bias monitoring, and automated documentation. Our platform helps organizations correctly classify AI systems and meet the 2026 deadline. Contact us to learn how we can help.

Start Free Trial

Related Articles

Complete Guide to EU AI Act Compliance: What You Need to Know by 2026

Comprehensive guide covering all aspects of EU AI Act compliance.

EU AI Act Compliance Checklist 2026: Complete Step-by-Step Guide

Complete compliance checklist covering all mandatory requirements.