The EU AI Act: A Practical Classification Guide for AI System Builders
The EU AI Act is live. Parts have been enforceable since February 2025. This guide answers two questions: Where does your system fall? And what do you need to do about it?

Prof. Dr. Ing. Pascal Laube

Key Takeaways
- The AI Act uses four risk tiers. Most AI systems fall into minimal risk and face no product-specific obligations.
- The use case determines the risk tier, not the technology. Same algorithm, different application, different category.
- Prohibited practices have been enforceable since February 2025. High-risk obligations apply from August 2026.
- Pure mathematical optimization is not an AI system under the Act. ML components that provide genuine inference bring a system into scope.
- There is no official 'EU AI Act Ready' certification. Readiness means documented, traceable, proportionate compliance.
This article is for general informational purposes only and does not constitute legal advice. Despite careful research, we assume no liability for the accuracy, completeness, or timeliness of the content. For specific legal questions regarding your AI systems, please consult a qualified attorney. This article reflects the state of Regulation (EU) 2024/1689 as of March 21, 2026.
The EU AI Act (Regulation (EU) 2024/1689) is in force. Parts of it have been enforceable since February 2025; the rest lands in August 2026 and August 2027. If you build, deploy, or integrate AI systems in the EU market, it applies to you.
This article won't rehash the political history or debate whether regulation is good. It sticks to the two questions that matter: Where does your system fall? And what do you need to do about it?
The Four Risk Categories
The AI Act sorts AI systems into four tiers. The labels 'limited risk' and 'minimal risk' don't actually appear as headings in the regulation text. They're practitioner shorthand for how the obligations shake out in practice. What matters is which articles apply to your system.

Unacceptable Risk (Banned)
Article 5. In force since February 2, 2025. These systems are prohibited outright. You can't build them, sell them, or use them in the EU.
If your system does any of these things, stop:
- Social scoring by governments.
- Subliminal manipulation that distorts behavior and causes harm.
- Exploitation of vulnerability based on age, disability, or economic situation.
- Predictive policing based solely on profiling with no objective facts.
- Untargeted facial recognition scraping from the internet or CCTV.
- Emotion recognition at work or school (medical and safety exceptions exist).
- Biometric categorization by protected characteristics.
- Real-time facial recognition in public spaces for law enforcement (three narrow exceptions: kidnapping victims, imminent terrorist threats, suspects of serious crimes with judicial authorization).
This isn't theory. These prohibitions have been legally binding for over a year. Penalties: up to 35 million euros or 7% of global annual turnover. Whichever is higher.
High Risk (Strict Obligations)
Articles 6, 8 through 17, Annex III. Applies from August 2, 2026.
Your system is high-risk if it falls into one of two buckets.
- Bucket 1 (Annex I): It's a safety component of a product covered by EU product safety legislation (medical devices, machinery, automotive, aviation) AND that product requires third-party conformity assessment. These obligations kick in August 2, 2027.
- Bucket 2 (Annex III): It's listed in Annex III. These obligations kick in August 2, 2026.
The complete Annex III list covers eight categories:
1. Biometrics: remote identification, categorization by sensitive attributes, emotion recognition.
2. Critical infrastructure: AI as safety components in managing digital infrastructure, road traffic, water, gas, heating, or electricity.
3. Education: admission decisions, exam evaluation, student monitoring.
4. Employment: CV screening, candidate ranking, hiring decisions, performance monitoring, promotion, termination.
5. Essential services: credit scoring for individuals, public benefits eligibility, insurance risk assessment, emergency dispatch prioritization.
6. Law enforcement: victim risk assessment, evidence evaluation, reoffending risk, criminal profiling.
7. Migration and border control: risk assessment, asylum/visa applications, border identification.
8. Justice and democracy: AI assisting judges, AI influencing election outcomes.
There's an escape clause. Article 6(3) says an Annex III system is NOT high-risk if it only does narrow procedural tasks, improves a previously completed human activity, detects patterns without replacing human judgment, or handles preparatory work. But systems that profile individuals can never use this escape.
Penalties for high-risk violations: up to 15 million euros or 3% of global turnover.
Limited Risk (Transparency Only)
Article 50. Applies from August 2, 2026. Your system lands here if it interacts directly with people (chatbots, virtual assistants), generates synthetic content (images, audio, video, text), recognizes emotions or categorizes people biometrically (outside prohibited contexts), or generates deepfakes.
The obligation: tell people what they're dealing with. Users must know they're talking to AI. AI-generated content must be marked as artificial in a machine-readable format. Deepfakes must be disclosed. No conformity assessment. No risk management system. Just transparency.
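The Act does not prescribe a specific marking format (provenance standards such as C2PA exist for that). Purely as an illustration of the kind of machine-readable signal Article 50 asks for, here is a minimal sketch using a hypothetical JSON disclosure record; the field names are our own, not mandated:

```python
import json

def make_disclosure(generator: str, model_version: str) -> str:
    """Return a machine-readable disclosure record for a generated asset.

    Illustrative only: the AI Act does not define this schema.
    """
    record = {
        "ai_generated": True,          # the core Article 50 signal
        "generator": generator,        # which system produced the content
        "model_version": model_version,
    }
    return json.dumps(record)

print(make_disclosure("example-image-model", "1.0"))
```

In practice you would attach such a record to the asset itself (metadata, watermark, or a provenance manifest) rather than ship it as a separate string.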
Minimal Risk (No Product-Specific Obligations)
Spam filters. Product recommendations. Video game AI. Inventory optimization. Internal search tools. If your AI system doesn't fit any of the categories above, you're here. No mandatory requirements. The vast majority of AI systems in production fall into this category.
One universal obligation applies across all tiers, including minimal risk: Article 4 requires that anyone providing or deploying AI systems ensures their staff has sufficient AI literacy.
- EUR 35M or 7% of global turnover: max fine for prohibited practices.
- EUR 15M or 3% of global turnover: max fine for high-risk violations.
- Aug 2026: high-risk obligations begin (5 months away).
Three Questions to Classify Your System
You don't need a lawyer to get a first answer. Work through these in order.

Question 1: Is It an AI System?
Article 3(1) defines an AI system as a machine-based system that operates with some autonomy and infers how to generate outputs from its inputs. The key word is 'infers.' Recital 12 clarifies: the inference must 'transcend basic data processing by enabling learning, reasoning or modelling.'
Yes, it's an AI system: Machine learning models (supervised, unsupervised, reinforcement learning), deep learning, neural networks, expert systems with genuine symbolic inference. A frozen ML model that never updates after deployment still counts.
No, it's not an AI system: Rule-based systems where humans define all the rules. Pure mathematical optimization (linear programming, mixed-integer programming, constraint programming). Statistical dashboards. Baseline prediction systems. The Commission Guidelines (C(2025) 5053) are explicit: ML models that merely accelerate or approximate established mathematical optimization methods fall outside the definition.
If your system isn't an AI system, the AI Act doesn't apply. You're done.
Question 2: Is It Prohibited?
Check your system against the eight Article 5 categories listed above. If it matches, you can't operate it in the EU. This has been enforceable since February 2, 2025. Most commercial AI systems won't be prohibited. But check anyway. Emotion detection features in HR tools and workspace monitoring products have drawn scrutiny.
Question 3: Does the Use Case Fall Under Annex III?
Match your system's intended purpose against the eight Annex III categories. The use case determines the tier, not the technology. The same anomaly detection algorithm is minimal risk when used for network monitoring and potentially high-risk when used to decide whether to freeze someone's bank account.
If you think your system is on the border, the regulation requires you to document your classification decision before market placement (Article 6(4)). You need to justify why you believe a system that touches an Annex III area is not high-risk. That justification must be registered in the EU database.
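The three questions can be run in order as a first-pass triage. A sketch, with our own flag names; a positive result means "investigate and document", not a legal conclusion:

```python
from enum import Enum

class Tier(Enum):
    OUT_OF_SCOPE = "not an AI system"
    PROHIBITED = "unacceptable risk (Article 5)"
    HIGH_RISK = "high risk (Annex III)"
    FURTHER_REVIEW = "check Article 50 transparency, else minimal risk"

def triage(is_ai_system: bool, matches_article_5: bool,
           matches_annex_iii: bool, profiles_individuals: bool = False,
           narrow_procedural_only: bool = False) -> Tier:
    """First-pass classification following the three questions in order."""
    if not is_ai_system:              # Question 1: scope
        return Tier.OUT_OF_SCOPE
    if matches_article_5:             # Question 2: prohibited practices
        return Tier.PROHIBITED
    if matches_annex_iii:             # Question 3: high-risk use case
        # Article 6(3) escape: narrow procedural tasks only, and never
        # available to systems that profile individuals.
        if narrow_procedural_only and not profiles_individuals:
            return Tier.FURTHER_REVIEW
        return Tier.HIGH_RISK
    return Tier.FURTHER_REVIEW

print(triage(is_ai_system=True, matches_article_5=False,
             matches_annex_iii=True))  # Tier.HIGH_RISK
```

Note how the escape clause is modeled: a profiling system that touches Annex III stays high-risk even if its task is narrow and procedural.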
What High-Risk Compliance Requires
If your system is high-risk, here's the full list. This is not optional.

Risk management system (Article 9): An ongoing, iterative process. Identify, estimate, evaluate, and mitigate risks to health, safety, and fundamental rights. Must include testing before market placement. Must consider impact on minors and vulnerable groups.
Data governance (Article 10): Training, validation, and testing datasets must be relevant, representative, and as error-free as possible. Documented procedures for annotation, labeling, cleaning, enrichment. Bias examination required.
Technical documentation (Article 11, Annex IV): Nine categories covering system description, development methods, architecture, data requirements, human oversight measures, testing results, cybersecurity, and performance metrics. Completed before market placement. Kept current. Retained 10 years.
Logging (Article 12): Automatic event recording throughout system lifetime. For biometric identification: log times, databases used, matched data, and identity of verifying personnel.
Transparency to deployers (Article 13): Instructions of use including accuracy metrics, known limitations, performance for specific population groups, data requirements, human oversight measures.
Human oversight (Article 14): The system must allow humans to understand capabilities and limitations, recognize automation bias, correctly interpret outputs, override or reverse decisions, and stop the system.
Accuracy, resilience, cybersecurity (Article 15): Appropriate levels throughout the lifecycle. Must address data poisoning, model poisoning, adversarial examples, and confidentiality attacks.
Quality management system (Article 17): Thirteen mandatory components including regulatory compliance strategy, design controls, testing procedures, data management, risk management, post-market monitoring, incident reporting, resource management, and accountability framework.
Conformity assessment (Article 43): For Annex III categories 2 through 8: self-assessment only. No external auditor required. For biometric identification (category 1): a notified body may be required. Most high-risk AI systems don't need external certification.
Plus: EU Declaration of Conformity (Article 47), CE marking (Article 48), EU database registration before market placement (Article 49), and post-market monitoring (Article 72).
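Of the list above, the Article 12 logging requirement for biometric identification is concrete enough to sketch in code. A minimal illustration; the record schema is our own invention, not mandated by the Act:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_system_events")

def log_biometric_event(reference_database: str, matched_input: str,
                        verified_by: str) -> str:
    """Record the Article 12 fields for a biometric identification event:
    time of use, reference database checked, the input data that matched,
    and the natural person who verified the result."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": "biometric_identification",
        "reference_database": reference_database,
        "matched_input": matched_input,
        "verified_by": verified_by,
    }
    line = json.dumps(record)
    logger.info(line)  # production systems need append-only, retained storage
    return line
```

The point is automatic, structured, retained records throughout the system lifetime; a logger that can be disabled or that overwrites itself doesn't satisfy the article.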
If You're a Deployer, Not a Provider
You didn't build the system, you use it. Your obligations under Article 26: follow provider instructions, assign competent human oversight, ensure input data quality, retain logs at least six months, report serious incidents, inform workers before deploying AI that affects them, inform individuals subject to high-risk AI. If you're a public authority, a private entity providing public services (healthcare, education, housing), or a deployer of credit scoring or insurance risk-scoring AI: conduct a Fundamental Rights Impact Assessment (Article 27) before first use.
One trap: if you substantially modify a high-risk system or deploy it under your own brand, you become the provider. All provider obligations then apply to you.
Timeline

- Feb 2025: prohibited practices + AI literacy (already in force)
- Aug 2025: GPAI model obligations (already in force)
- Aug 2026: high-risk + transparency (5 months away)
- Aug 2027: Annex I products (17 months away)
A real-world complication: the European Parliament's IMCO and LIBE committees voted 101 to 9 on March 18, 2026 to support the Digital Omnibus on AI, which would push the high-risk deadline from August 2026 to December 2027. As of this writing, it has not passed a plenary vote. The original August 2026 deadline remains legally binding until an amending regulation is published.
Separately: the Commission missed its own February 2026 deadline for publishing guidance on high-risk AI classification. That's a signal that enforcement of the August 2026 date may be bumpy, but it doesn't change your legal obligation.
Plan for August 2026. Adjust if the law changes. Don't bet on delays that aren't signed yet.
Penalties

Prohibited AI practices: up to EUR 35 million or 7% of global annual turnover, whichever is higher. High-risk non-compliance: up to EUR 15 million or 3%. Incorrect information to authorities: up to EUR 7.5 million or 1%.
For SMEs and startups, the rule flips: whichever is lower applies. A startup with EUR 3 million in revenue facing a prohibited-AI violation would pay at most EUR 210,000 (7% of 3M), not EUR 35 million.
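The SME rule is simple arithmetic. A sketch of the cap calculation (the percentage is passed as a whole number so the math stays exact):

```python
def fine_cap(turnover_eur: int, fixed_cap_eur: int, turnover_pct: int,
             is_sme: bool) -> float:
    """Maximum administrative fine under Article 99: the higher of the fixed
    cap and the turnover percentage; for SMEs and startups, the lower."""
    pct_cap = turnover_eur * turnover_pct / 100
    if is_sme:
        return min(fixed_cap_eur, pct_cap)
    return max(fixed_cap_eur, pct_cap)

# Startup with EUR 3M turnover, prohibited-practice violation:
print(fine_cap(3_000_000, 35_000_000, 7, is_sme=True))   # 210000.0
print(fine_cap(3_000_000, 35_000_000, 7, is_sme=False))  # 35000000
```

These are statutory maxima; the actual fine in a given case is set by the competent authority and can be lower.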
National competent authorities enforce AI system violations. The EU AI Office handles GPAI model violations exclusively (up to EUR 15M or 3%, effective August 2026). As of March 2026, no public fine has been issued anywhere in the EU. Only 8 of 27 member states have designated national contact points, and Finland is the only one with a confirmed active supervisor. Investigations are reportedly underway, but the enforcement machinery is still warming up.
Where Optimization Software Falls

Pure mathematical optimization is not an AI system. The Commission Guidelines (C(2025) 5053, paragraph 42) are explicit: systems used to improve mathematical optimization, including linear programming, mixed-integer programming, and constraint programming, 'fall outside the scope of the AI system definition' because they 'do not transcend basic data processing.'
This exclusion goes further than you might expect. Even ML models that accelerate or approximate optimization methods are excluded. A neural network predicting warm-start values to speed up your MIP solver? Not an AI system. An ML model approximating a computationally expensive objective function? Still not an AI system. The Commission's test: if the ML exists to improve computational performance rather than to make intelligent decisions, it's outside scope.
The line is crossed when ML provides genuine inference. A demand forecasting model that learns patterns from historical data and produces predictions feeding into the optimizer likely qualifies as an AI system. The optimizer itself doesn't. But the forecasting model does, because it infers beyond basic data processing.
A common trap: logistic regression. When used to accelerate a solver, it's excluded. When used as a credit risk scorer, it IS an AI system. Same technique, different function, different answer.
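The test can be reduced to a predicate on the model's function rather than its architecture. This is a rough paraphrase of the guidelines' logic, with our own example labels, not official wording:

```python
def in_ai_act_scope(purpose: str) -> bool:
    """Rough paraphrase of the C(2025) 5053 test: ML that exists to improve
    the computational performance of an optimization method is out of scope;
    ML whose output is itself a prediction or decision is in scope."""
    performance_only = {
        "warm_start_prediction",    # e.g. speeds up a MIP solver
        "objective_approximation",  # surrogate for an expensive function
    }
    return purpose not in performance_only

print(in_ai_act_scope("warm_start_prediction"))  # False
print(in_ai_act_scope("credit_risk_scoring"))    # True
```

The same logistic regression model would flip the answer depending on which purpose string describes it, which is exactly the trap the paragraph above warns about.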
Where kint Fits
kint combines mathematical solvers with ML components. The solver part (LP, MIP, constraint programming) falls outside the AI Act entirely. The ML components (demand prediction, surrogate models, learned heuristics) bring specific features into scope when they provide genuine inference capability.
For kint's typical use cases (logistics optimization, production scheduling, pricing) the ML components are likely minimal risk. No Annex III category applies. For OEM partners integrating kint into products that touch Annex III areas (credit decisions, insurance pricing, workforce management with individual-level consequences), the risk classification follows the use case. kint supports the documentation, logging, and transparency requirements that high-risk classification demands.
What 'EU AI Act Ready' Actually Means
There's no official certification. No badge. No seal from Brussels. What you can do is demonstrate compliance through concrete, verifiable measures.
Your AI systems are inventoried. Each is classified with documented reasoning. Prohibited systems are gone. High-risk systems have risk management, technical documentation, data governance, human oversight, conformity assessment, CE marking, and database registration. Limited-risk systems have transparency disclosures. Your staff has AI literacy training. Deployer obligations are covered.
That's what readiness looks like. Documented, traceable, and proportionate to your risk tier.
What to Do Next
Step 1: Build your AI inventory. List every system that might qualify as an AI system. For each: does it use ML, deep learning, or knowledge-based inference that goes beyond basic data processing? If yes, it's likely an AI system. If it only executes human-defined rules or performs mathematical optimization, it probably isn't.
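One way to make Step 1 concrete is a structured inventory entry per system. A sketch, assuming nothing beyond the questions in the paragraph above; all field names are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InventoryEntry:
    name: str
    owner: str
    uses_ml_or_inference: bool        # ML, deep learning, symbolic inference?
    rules_or_pure_optimization: bool  # only human-defined rules / pure math?
    annex_iii_area: Optional[str] = None
    classification: str = "unclassified"
    reasoning: str = ""               # document why, especially near Annex III

def first_pass(entry: InventoryEntry) -> str:
    """Step 1 heuristic: flag probable AI systems for full classification."""
    if entry.rules_or_pure_optimization and not entry.uses_ml_or_inference:
        return "probably not an AI system"
    if entry.uses_ml_or_inference:
        return "likely an AI system: run the three-question framework"
    return "unclear: review manually"

entry = InventoryEntry("demand-forecaster", "supply-chain team",
                       uses_ml_or_inference=True,
                       rules_or_pure_optimization=False)
print(first_pass(entry))
```

The `reasoning` field matters most: for anything near an Annex III area, that text is what you would later register alongside your not-high-risk determination.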
Step 2: Classify each system. Run each through the three-question framework. Document the result. Pay special attention to systems that touch Annex III areas. If classification is ambiguous, err on the side of caution and document your reasoning.
Step 3: Start compliance work for high-risk systems. The conformity assessment for most Annex III systems is self-assessment. No external auditor. But documentation, risk management, and data governance take months to implement properly. Don't wait for final Commission guidance on benchmarks. Document what you have. Revise when guidance arrives.
Disclaimer
The contents of this article are for general informational purposes only and do not constitute legal advice (keine Rechtsberatung im Sinne des RDG). The information provided does not replace individual legal counsel. Despite careful research, we assume no liability for the accuracy, completeness, or timeliness of the content. Regulations may change, and the interpretation of specific provisions may differ depending on your jurisdiction and circumstances. Use of this content is at your own risk. For questions regarding the classification and compliance of your specific AI systems, please consult a qualified attorney specializing in EU technology law.
The Digital Omnibus on AI, currently under legislative review, may alter the application dates described in this article. The deadlines stated reflect the original Regulation (EU) 2024/1689 as published in the Official Journal. Sources: EUR-Lex (eur-lex.europa.eu), European Commission AI Act Service Desk (ai-act-service-desk.ec.europa.eu), Commission Guidelines C(2025) 5053 final.

