Quethos Sentinel | EU AI Act Compliance & Audit Engine

Sentinel scans your GitHub, GitLab, and Bitbucket repositories, classifies AI components against EU AI Act obligations (Article 5, Annex III, GPAI), and generates comprehensive compliance reports automatically. Identify prohibited AI, high-risk systems, and limited-risk applications before the August 2026 enforcement deadline.

Your AI stack may carry up to €35M in regulatory liability. Enforcement for high-risk AI systems begins in August 2026. Sentinel delivers zero-installation compliance telemetry from your existing codebase.

Start Free Audit · View Pricing

EU AI Act Key Compliance Figures

  • €35M — Maximum penalty per EU AI Act violation (or 7% of global annual turnover)
  • February 2025 — Prohibited AI practices ban active (Article 5)
  • August 2026 — Full high-risk AI system compliance deadline (Annex III)
  • Articles 5–55 — Full regulatory scope covered in every Sentinel scan

From Repository to Compliance Report in Minutes

Three automated steps. No infrastructure overhead. No manual review of thousands of files.

  1. Connect Your Repository

    Authorize via GitHub, GitLab, or Bitbucket OAuth. Select the AI platform or codebase you want audited. Sentinel shallow-clones into a zero-trust temporary environment in under 30 seconds.

  2. Automated Risk Classification

    Pattern-matching signals identify AI components across Python, JavaScript, TypeScript, and Jupyter notebooks. Google Gemini LLM analyzes file intent and classifies each component through the EU AI Act's tier system: Prohibited (Article 5), High-Risk (Annex III), GPAI (Articles 51–55), Limited (Article 50), or Minimal Risk.

  3. Compliance Report & Remediation

    Findings are categorized by risk tier, EU AI Act article reference, and mandatory action. Generate GitHub issues with single-click precision. Download a PDF compliance report for legal documentation.
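The deterministic side of step 2's risk classification can be sketched as a rule table that maps code patterns to tiers. The patterns below are illustrative only, not Sentinel's actual rule set, and the LLM intent-analysis pass that complements them is omitted:

```python
import re

# Illustrative deterministic rules, ordered from most to least severe tier.
# Real classification combines rules like these with LLM intent analysis.
RULES = [
    (re.compile(r"social[_\s-]?scor", re.I), "Prohibited"),        # Article 5
    (re.compile(r"(cv|resume)[_\s-]?screen", re.I), "High-Risk"),  # Annex III
    (re.compile(r"\b(openai|anthropic|gemini)\b", re.I), "GPAI"),  # Articles 51-55
    (re.compile(r"\bchatbot\b", re.I), "Limited"),                 # Article 50
]

def classify_snippet(code: str) -> str:
    """Return the first (most severe) tier whose rule matches, else Minimal."""
    for pattern, tier in RULES:
        if pattern.search(code):
            return tier
    return "Minimal"
```

Ordering the rules by severity means a file that triggers both a prohibited and a limited-risk pattern is reported at the stricter tier.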
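Step 3's single-click issue creation ultimately reduces to turning a finding into the request body for GitHub's create-issue endpoint (`POST /repos/{owner}/{repo}/issues`, which accepts `title`, `body`, and `labels`). A minimal sketch, assuming a finding dict with hypothetical `tier`, `file`, `article`, and `action` fields:

```python
import json

def build_issue_payload(finding: dict) -> str:
    """Build the JSON body for GitHub's create-issue endpoint.

    The finding fields used here are illustrative, not Sentinel's schema.
    """
    title = f"[EU AI Act] {finding['tier']}: {finding['file']}"
    body = (
        f"**Article reference:** {finding['article']}\n\n"
        f"**Mandatory action:** {finding['action']}\n"
    )
    labels = ["eu-ai-act", finding["tier"].lower()]
    return json.dumps({"title": title, "body": body, "labels": labels})
```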

AI Compliance Features That Replace an External Consultant

  • EU AI Act Codebase Scanner

    Walks your entire repository, identifies AI components using INPUT/PROCESSING/OUTPUT signal detection, and batches suspicious files for deep intent analysis. Supports Python, JavaScript, TypeScript, JSX, TSX, and Jupyter notebooks.

  • Risk Tier Classification

    Classifies every AI component through all five EU AI Act tiers with specific article references. Deterministic rules for Article 5 prohibited AI detections. GPAI foundation model identification for Articles 51–55.

  • Biometric & Sensitive Data Detection

    Detects camera and microphone access (getUserMedia, VideoCapture) in files that also contain AI logic. Automatically escalates the risk tier when biometric sensor code co-occurs with classification or scoring models.

  • Import Dependency Graph Analysis

    Builds cross-language import graphs (Python, JS, TS, Java, Go, Rust) to propagate AI risk across transitive dependencies. An AI model used by a utility function escalates the entire call chain.

  • GitHub & GitLab Integration

    Turns findings into tracked issues with regulatory references and mandatory corrective actions. Integrates directly into your engineering workflow for sprint-ready remediation.

  • Real-Time Scan Dashboard

    Findings stream to your dashboard as they are discovered via Server-Sent Events. Watch your compliance posture build in real time as files are analyzed.
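The INPUT/PROCESSING/OUTPUT signal detection used by the codebase scanner can be sketched with regex signals. The patterns here are illustrative stand-ins; Sentinel's production signal set is broader and not public:

```python
import re

# Illustrative signal patterns for the three signal classes.
SIGNALS = {
    "INPUT": re.compile(r"(read_csv|load_dataset|ImageFolder)"),
    "PROCESSING": re.compile(r"(\.fit\(|\.predict\(|torch\.nn|keras)"),
    "OUTPUT": re.compile(r"(argmax|\.score\(|predict_proba)"),
}

def detect_signals(source: str) -> set[str]:
    """Return which of the three signal classes appear in a source file."""
    return {name for name, pattern in SIGNALS.items() if pattern.search(source)}

def is_ai_candidate(source: str) -> bool:
    # Files showing all three signals are batched for deep intent analysis.
    return detect_signals(source) == {"INPUT", "PROCESSING", "OUTPUT"}
```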
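The biometric escalation rule described above is, in essence, a co-occurrence check that bumps a component to at least High-Risk. A minimal sketch with illustrative patterns and tier ordering:

```python
import re

BIOMETRIC = re.compile(r"(getUserMedia|VideoCapture|AudioRecord)")
MODEL_LOGIC = re.compile(r"(classif|\.predict\(|scor)", re.I)

# Tier names ordered from most to least severe (illustrative ordering).
SEVERITY = ["Prohibited", "High-Risk", "GPAI", "Limited", "Minimal"]

def escalate_for_biometrics(source: str, tier: str) -> str:
    """Bump a component to at least High-Risk when sensor access
    co-occurs with classification or scoring logic."""
    if BIOMETRIC.search(source) and MODEL_LOGIC.search(source):
        if SEVERITY.index(tier) > SEVERITY.index("High-Risk"):
            return "High-Risk"
    return tier
```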
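Risk propagation across the import graph is, at heart, a reverse reachability walk: flag every module that transitively imports an AI-bearing one. A minimal sketch over a module-to-dependencies mapping:

```python
from collections import deque

def propagate_risk(imports: dict[str, list[str]], risky: set[str]) -> set[str]:
    """Given module -> imported-modules edges, flag every module that
    transitively imports a risky (AI-bearing) module."""
    # Reverse the edges so we can walk from each risky module to its importers.
    importers: dict[str, list[str]] = {}
    for mod, deps in imports.items():
        for dep in deps:
            importers.setdefault(dep, []).append(mod)
    flagged = set(risky)
    queue = deque(risky)
    while queue:
        current = queue.popleft()
        for parent in importers.get(current, []):
            if parent not in flagged:
                flagged.add(parent)
                queue.append(parent)
    return flagged
```

With this shape, an AI model buried behind a utility module escalates the whole call chain, exactly as the feature description states.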
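Streaming findings over Server-Sent Events means emitting `event:`/`data:` frames terminated by a blank line, per the SSE wire format. A minimal sketch of the producer side; the function names are illustrative, not Sentinel's API:

```python
import json

def sse_event(finding: dict) -> str:
    """Format one finding as an SSE frame: an 'event:' line, a 'data:'
    line carrying the JSON payload, and a blank-line terminator."""
    return f"event: finding\ndata: {json.dumps(finding)}\n\n"

def stream_findings(findings):
    # In a real app this generator would back a text/event-stream response.
    for finding in findings:
        yield sse_event(finding)
```

A browser dashboard can then subscribe with `EventSource` and render each `finding` event as it arrives.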

EU AI Act Risk Tiers: What Category Is Your AI System?

The EU AI Act classifies AI systems by risk: prohibited, high-risk, general-purpose (GPAI), limited, and minimal. Sentinel automatically identifies which tier applies to every component in your codebase.

Prohibited AI Systems (Article 5) — Banned since February 2025

Applies to: Emotion recognition in workplaces and educational institutions, social scoring systems, real-time remote biometric identification in public spaces, subliminal manipulation systems, AI that exploits vulnerabilities of specific groups.

Obligations: Immediate withdrawal of functionality. No compliance path — architectural redesign required. Full halt of data processing activities.

Sentinel detection: Flags offending files, identifies the specific prohibited practice, and suggests alternative architectures that achieve business goals without violating Article 5.

High-Risk AI Systems (Annex III) — Deadline August 2026

Applies to: AI used in employment and recruitment (CV screening, performance assessment), credit scoring and insurance risk, educational assessment, law enforcement and judicial decisions, critical infrastructure management, biometric categorization systems.

Obligations: Establish a formal risk management system with continuous monitoring. Implement comprehensive audit logging. Design in human oversight mechanisms. Create and maintain technical documentation and conformity assessment files.

Sentinel detection: Identifies missing human-in-the-loop logic, absent audit trail implementations, and generates draft technical documentation templates.

General-Purpose AI (GPAI) — Articles 51–55

Applies to: Foundation models, large language models (LLMs), diffusion models, and general-purpose generative AI systems with systemic risk potential (training compute above 10^25 FLOPs).

Obligations: Public transparency obligations including model cards. Copyright law compliance and training data reporting. Systemic risk assessment and mitigation plan for large-scale models.

Sentinel detection: Maps foundation model API calls (OpenAI, Anthropic, Google) to transparency requirements and identifies missing disclosure mechanisms.

Limited Risk AI (Article 50) — Transparency Obligations

Applies to: Chatbots and conversational AI, deepfake generation systems, AI-generated content systems, emotion recognition systems used outside prohibited contexts.

Obligations: Mandatory disclosure to users that they are interacting with an AI system. Watermarking and labeling of all AI-generated content.

Sentinel detection: Scans user interface code for mandatory AI disclosure strings and verifies watermarking or content labeling logic is present.

EU AI Act Compliance Pricing — Starting at €49/month

Traditional EU AI Act audits from compliance consultants cost €30,000–€100,000 per system. Sentinel delivers continuous automated compliance monitoring starting at €49/month — equivalent to roughly one hour of consultant time.

  • Starter — €49/month

    For individual developers and small teams. Includes 20 scans per month, 3 repositories, findings register with article references, GitHub issue creation, and 30 days of scan history.

  • Growth — €199/month

    For engineering squads and compliance teams. Includes 100 scans per month, unlimited repositories, Taskboard scan integration, priority support, PDF compliance reports, and 12 months of scan history.

  • Enterprise — Custom pricing

    For large-scale organizations. Includes unlimited scans, unlimited repositories, on-premise gateway deployment, dedicated account manager, SSO integration, and SLA guarantees.

Frequently Asked Questions about EU AI Act Compliance

Is Sentinel a legally certified EU AI Act compliance assessment?
Sentinel is a developer audit tool — a linter for EU AI Act obligations. It identifies compliance risks at the code level and generates the technical documentation you need before engaging legal counsel. It is not a legal certification and does not replace qualified legal advice. Think of it as the step before your lawyer: you arrive knowing exactly what needs to be remediated.
Does my source code leave my environment during a scan?
Your repository is cloned into a temporary secure buffer, analyzed, and immediately deleted after the scan completes. Code snippets (up to 3,000 characters per file) are sent to the Google Gemini API for intent analysis. We do not store your source code. Full details are in our Privacy Policy.
Which programming languages does Sentinel scan?
Sentinel currently scans Python (.py), JavaScript (.js), TypeScript (.ts), React (.jsx, .tsx), JSON configuration files, and Jupyter notebooks (.ipynb). This covers the vast majority of AI system implementations. R, Java, Go, and Rust are on the roadmap.
What if my AI system is classified as High Risk under Annex III?
A high-risk classification does not mean you must stop shipping. It means specific Article obligations apply: risk management documentation, audit logging, human oversight mechanisms, and technical conformity assessment files. Sentinel tells you exactly which files are missing which implementations and generates suggested code fixes for each gap.
When does the August 2026 EU AI Act deadline apply to my company?
The August 2, 2026 deadline applies to high-risk AI systems listed in Annex III — including employment tools, credit scoring, education assessment, law enforcement applications, and critical infrastructure AI. If your AI system makes or significantly influences consequential decisions in these domains, you must be compliant before that date. Sentinel helps you determine if you are in scope and what compliance gaps exist.
Does the EU AI Act apply to companies outside the EU?
Yes. The EU AI Act has extraterritorial reach similar to GDPR. Any company whose AI systems are used by people in the EU, or whose AI outputs affect people in the EU, must comply — regardless of where the company is headquartered. US, UK, and global companies building AI used in Europe are in scope.

EU AI Act Insights & Updates

Expert analysis, compliance guides, and regulatory updates from the Quethos Sentinel team.

Read all EU AI Act articles →