Start here: classify fast
Use this page when you need to answer: does this system trigger the EU AI Act's prohibited-practice, high-risk, or transparency obligations?
Use the tool:
- EU AI Act Risk Classifier: /eu-ai-act
Mermaid source
```mermaid
flowchart LR
  A[Paste system spec] --> B[Fact extraction]
  B --> C[Deterministic evaluator]
  C --> D[Audit-style report]
  D --> E[Follow-ups only if needed]
```
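The pipeline in the diagram can be sketched as plain functions. This is a minimal illustration, not the tool's real API: the function names, fact keys, and rules below are all hypothetical stand-ins.

```python
# Hypothetical sketch of the four pipeline stages: spec -> facts -> verdict -> report.

def extract_facts(spec: str) -> dict:
    """Turn a free-text system spec into explicit yes/no/unknown facts."""
    facts = {
        "does_social_scoring": "unknown",
        "is_annex_iii_area": "unknown",
        "interacts_with_humans": "unknown",
    }
    text = spec.lower()
    if "social scoring" in text:
        facts["does_social_scoring"] = "yes"
    if "chatbot" in text:
        facts["interacts_with_humans"] = "yes"
    return facts

def evaluate(facts: dict) -> str:
    """Deterministic evaluator: explicit rules over explicit facts,
    checked in severity order. An unknown fact at a higher severity
    level blocks a conclusive lower-severity answer."""
    if facts["does_social_scoring"] == "yes":
        return "PROHIBITED"
    if facts["does_social_scoring"] == "unknown":
        return "UNKNOWN"
    if facts["is_annex_iii_area"] == "yes":
        return "HIGH_RISK"
    if facts["is_annex_iii_area"] == "unknown":
        return "UNKNOWN"
    if facts["interacts_with_humans"] == "yes":
        return "TRANSPARENCY"
    if facts["interacts_with_humans"] == "unknown":
        return "UNKNOWN"
    return "OUT_OF_SCOPE"

def render_report(facts: dict, result: str) -> str:
    """Audit-style report: the verdict plus every fact it rests on."""
    lines = [f"Classification: {result}", "Facts:"]
    lines += [f"  {k}: {v}" for k, v in facts.items()]
    return "\n".join(lines)

spec = "A chatbot that answers HR policy questions."
facts = extract_facts(spec)
print(render_report(facts, evaluate(facts)))
```

Note that the chatbot spec above classifies as UNKNOWN, not TRANSPARENCY: the prohibited-practice fact is still unanswered, so the evaluator asks rather than guesses.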
FAQ
What is the EU AI Act trying to classify?
At a high level, the Act sorts AI systems by risk; most teams first need a fast triage across four outcomes:
- Prohibited practices (stop-and-review category)
- High-risk (Annex III signal)
- Transparency obligations (Article 50 signals)
- Unknown (insufficient facts → follow-up questions)
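The four triage outcomes above can be modeled as an ordered set of labels. This is a hypothetical sketch (the classifier's actual label names may differ); the point is that severity gives you a tie-break rule when several signals fire at once.

```python
from enum import IntEnum

class Triage(IntEnum):
    # Lower value = higher severity; check rules in this order.
    PROHIBITED = 0      # stop-and-review category
    HIGH_RISK = 1       # Annex III signal
    TRANSPARENCY = 2    # Article 50 signal
    UNKNOWN = 3         # insufficient facts -> ask follow-ups

# When multiple signals fire, report the most severe one.
signals = [Triage.TRANSPARENCY, Triage.HIGH_RISK]
print(min(signals).name)  # -> HIGH_RISK
```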
What does “deterministic” mean here?
It means the output is based on explicit rules and explicit facts (yes/no/unknown), not probabilistic text guesses.
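One way to see why yes/no/unknown facts stay deterministic: rules can combine them with three-valued (Kleene-style) logic, so an unknown input propagates through the rule instead of being guessed away. A minimal sketch (`tri_and` is an illustrative helper, not part of the tool):

```python
def tri_and(a: str, b: str) -> str:
    """Kleene three-valued AND over "yes"/"no"/"unknown".
    A single "no" decides the rule; otherwise any "unknown" keeps it open."""
    if a == "no" or b == "no":
        return "no"
    if a == "yes" and b == "yes":
        return "yes"
    return "unknown"

# Rule: "used in employment AND makes hiring decisions" -> high-risk signal
print(tri_and("yes", "unknown"))  # unknown -> ask a follow-up question
print(tri_and("yes", "no"))       # no -> this rule does not fire
```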
What should I do first as a product team?
- Create a short system spec describing purpose, users, and what decisions it affects.
- Run it through the classifier: /eu-ai-act
- If the result is UNKNOWN, answer the follow-ups until the classification stabilizes.
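The "answer follow-ups until it stabilizes" step is just a loop: while the verdict is UNKNOWN, find the next unanswered fact, answer it, and re-run the evaluator. A minimal sketch with a stand-in evaluator and stubbed answers (all names hypothetical):

```python
def classify(facts: dict) -> str:
    """Stand-in evaluator: UNKNOWN while any fact is unanswered."""
    if facts.get("does_social_scoring") == "yes":
        return "PROHIBITED"
    if "unknown" in facts.values():
        return "UNKNOWN"
    return "OUT_OF_SCOPE"

def next_question(facts: dict):
    """Pick the next unanswered fact to ask about."""
    for name, value in facts.items():
        if value == "unknown":
            return name
    return None

facts = {"does_social_scoring": "unknown", "is_annex_iii_area": "unknown"}
answers = {"does_social_scoring": "no", "is_annex_iii_area": "no"}  # stubbed; in practice, ask the team

while classify(facts) == "UNKNOWN":
    q = next_question(facts)
    facts[q] = answers[q]

print(classify(facts))  # -> OUT_OF_SCOPE once every fact is answered
```

The loop always terminates here because each iteration answers one fact, and the verdict can only stay UNKNOWN while unanswered facts remain.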
Is this legal advice?
No. It’s a deterministic decision-support workflow that helps you get to the right review track.
Next guides
- EU AI Act: prohibited practices overview → /policy-guides/eu-ai-act-prohibited
- EU AI Act: Annex III high-risk overview → /policy-guides/eu-ai-act-high-risk
- EU AI Act: transparency obligations overview → /policy-guides/eu-ai-act-transparency