Quethos Sentinel | EU AI Act Compliance & Audit Engine
Sentinel scans your GitHub, GitLab, and Bitbucket repositories, classifies AI components against EU AI Act obligations (Article 5, Annex III, GPAI), and generates comprehensive compliance reports automatically. Identify prohibited AI, high-risk systems, and limited-risk applications before the August 2026 enforcement deadline.
Your AI stack may carry up to €35 million (or 7% of global annual turnover) in regulatory liability. Enforcement for high-risk AI systems begins in August 2026. Sentinel delivers zero-installation compliance telemetry from your existing codebase.
Start Free Audit
View Pricing
EU AI Act Risk Tiers: What Category Is Your AI System?
The EU AI Act classifies AI systems into four tiers. Sentinel automatically identifies which tier applies to every component in your codebase.
Prohibited AI Systems (Article 5) — Banned since February 2025
Applies to: Emotion recognition in workplaces and educational institutions, social scoring systems, real-time remote biometric identification in public spaces, subliminal manipulation systems, AI that exploits vulnerabilities of specific groups.
Obligations: Immediate withdrawal of functionality. No compliance path — architectural redesign required. Full halt of data processing activities.
Sentinel detection: Flags offending files, identifies the specific prohibited practice, and suggests alternative architectures that achieve business goals without violating Article 5.
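A first-pass detector for prohibited practices can be sketched as a keyword scan over the repository. The pattern-to-practice map below is purely illustrative (not Sentinel's actual rule set), and any real classifier would need richer signals than keywords, such as model configurations and data flows:

```python
import re
from pathlib import Path

# Hypothetical pattern-to-practice map for Article 5 screening.
# Real detection needs far richer signals than source keywords.
ARTICLE_5_PATTERNS = {
    r"emotion[_\s-]?(?:recognition|detect)": "Emotion recognition (workplace/education context)",
    r"social[_\s-]?scor": "Social scoring",
    r"(?:realtime|real[_\s-]?time).{0,40}biometric": "Real-time remote biometric identification",
}

def flag_prohibited_practices(repo_root):
    """Return (file, suspected practice) pairs where a pattern appears."""
    hits = []
    for path in Path(repo_root).rglob("*.py"):
        text = path.read_text(errors="ignore").lower()
        for pattern, practice in ARTICLE_5_PATTERNS.items():
            if re.search(pattern, text):
                hits.append((str(path), practice))
    return hits
```

A keyword hit is only a lead for human review; naming a variable `emotion_recognition` does not by itself establish an Article 5 violation.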
High-Risk AI Systems (Annex III) — Deadline August 2026
Applies to: AI used in employment and recruitment (CV screening, performance assessment), credit scoring and insurance risk, educational assessment, law enforcement and judicial decisions, critical infrastructure management, biometric categorization systems.
Obligations: Establish a formal risk management system with continuous monitoring. Implement comprehensive audit logging. Design in human oversight mechanisms. Create and maintain technical documentation and conformity assessment files.
Sentinel detection: Identifies missing human-in-the-loop logic, absent audit trail implementations, and generates draft technical documentation templates.
General-Purpose AI (GPAI) — Articles 51–55
Applies to: Foundation models, large language models (LLMs), diffusion models, and general-purpose generative AI systems with systemic risk potential (training compute above 10^25 FLOPs).
Obligations: Public transparency obligations including model cards. Copyright law compliance and training data reporting. Systemic risk assessment and mitigation plan for large-scale models.
Sentinel detection: Maps foundation model API calls (OpenAI, Anthropic, Google) to transparency requirements and identifies missing disclosure mechanisms.
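The mapping step can be sketched as an import scan that links foundation-model SDK usage to the transparency duty it triggers. The SDK-to-obligation table below is an assumption for illustration, not Sentinel's actual rule set:

```python
import re
from pathlib import Path

# Illustrative mapping from foundation-model SDK imports to the EU AI Act
# transparency duties they implicate (simplified for the sketch).
GPAI_SDKS = {
    "openai": "OpenAI API call sites: Art. 53 transparency duties",
    "anthropic": "Anthropic API call sites: Art. 53 transparency duties",
    "google.generativeai": "Gemini API call sites: Art. 53 transparency duties",
}

IMPORT_RE = re.compile(r"^\s*(?:import|from)\s+([\w.]+)", re.MULTILINE)

def scan_for_gpai_usage(repo_root):
    """Return {file: [obligation, ...]} for files importing known GPAI SDKs."""
    findings = {}
    for path in Path(repo_root).rglob("*.py"):
        modules = IMPORT_RE.findall(path.read_text(errors="ignore"))
        hits = sorted({note for m in modules
                       for sdk, note in GPAI_SDKS.items()
                       if m == sdk or m.startswith(sdk + ".")})
        if hits:
            findings[str(path)] = hits
    return findings
```

Matching on imports rather than raw strings keeps false positives down when an SDK name merely appears in comments or documentation.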
Limited Risk AI (Article 50) — Transparency Obligations
Applies to: Chatbots and conversational AI, deepfake generation systems, AI-generated content systems, emotion recognition systems used outside prohibited contexts.
Obligations: Mandatory disclosure to users that they are interacting with an AI system. Watermarking and labeling of all AI-generated content.
Sentinel detection: Scans user interface code for mandatory AI disclosure strings and verifies watermarking or content labeling logic is present.
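A disclosure check of this kind can be approximated by searching UI sources for recognizable disclosure phrasing. The phrase list and file extensions below are assumptions for the sketch, not Sentinel's actual patterns:

```python
import re
from pathlib import Path

# Illustrative disclosure phrases (hypothetical list, not Sentinel's patterns).
DISCLOSURE_PATTERNS = [
    re.compile(r"you are (?:chatting|interacting) with an AI", re.IGNORECASE),
    re.compile(r"AI[- ]generated", re.IGNORECASE),
    re.compile(r"powered by (?:an )?AI", re.IGNORECASE),
]

def check_disclosure(source_dirs, extensions=(".html", ".jsx", ".tsx", ".vue")):
    """Return UI files containing no recognizable AI disclosure string."""
    missing = []
    for root in source_dirs:
        for path in Path(root).rglob("*"):
            if path.suffix not in extensions:
                continue
            text = path.read_text(errors="ignore")
            if not any(p.search(text) for p in DISCLOSURE_PATTERNS):
                missing.append(str(path))
    return missing
```

Reporting files that lack a disclosure string (rather than files that contain one) matches how the obligation works: every user-facing AI surface needs the notice, so the absence is the finding.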
EU AI Act Insights & Updates
Expert analysis, compliance guides, and regulatory updates from the Quethos Sentinel team.
Read all EU AI Act articles →