EU AI Act · Regulation (EU) 2024/1689

EU AI Act compliance for AI agent governance

Article-by-article mapping of how Vortalis helps you meet the EU AI Act's requirements for high-risk AI systems. Technical controls, not checkbox compliance.

Article 9

Risk Management System

Providers of high-risk AI systems shall establish, implement, document, and maintain a risk management system. This system must identify and analyse known and reasonably foreseeable risks, estimate and evaluate risks that may emerge, and adopt suitable risk management measures.

How Vortalis Addresses This

  • Deny-by-default policy engine blocks all agent actions unless explicitly permitted, eliminating the risk of unintended data access or operations.
  • Anomaly detection with statistical baselines identifies emerging risks in real time — unusual access volumes, off-hours activity, new action types, and inter-agent delegation anomalies.
  • Global and per-service kill switches provide immediate risk mitigation, propagated instantly across all running instances.
  • Declarative policies with version history and validation before deployment ensure risk management measures are documented, reviewable, and auditable.
  • Inter-agent governance with configurable delegation limits prevents uncontrolled agent-to-agent escalation.
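
The deny-by-default model can be sketched in a few lines. This is an illustrative sketch, not the Vortalis API; the rule fields, agent names, and service names are assumptions:

```python
# Hypothetical deny-by-default policy check: an action is permitted only if
# an explicit rule matches; anything unlisted is denied.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    agent: str      # agent identifier the rule applies to
    service: str    # target service, e.g. "crm"
    action: str     # permitted action, e.g. "read"

def evaluate(rules: list[Rule], agent: str, service: str, action: str) -> str:
    """Return "permit" only when an explicit rule matches; otherwise deny."""
    for rule in rules:
        if (rule.agent, rule.service, rule.action) == (agent, service, action):
            return "permit"
    return "deny"  # deny-by-default: no matching rule means no access

rules = [Rule("support-agent", "crm", "read")]
assert evaluate(rules, "support-agent", "crm", "read") == "permit"
assert evaluate(rules, "support-agent", "crm", "delete") == "deny"
```

The important property is the final `return "deny"`: forgetting to write a rule fails closed, not open.
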
Article 10

Data and Data Governance

High-risk AI systems which make use of techniques involving the training of AI models with data shall be developed on the basis of training, validation, and testing data sets that meet quality criteria. Data governance and management practices shall address data collection, data preparation, and relevant assumptions.

How Vortalis Addresses This

  • Data tokenisation replaces sensitive fields with opaque tokens before they reach AI agents, enforcing data minimisation at the infrastructure level.
  • Field-level access control policies define exactly which data each agent can access, with different rules per service, action type, and workflow state.
  • Tenant-scoped data isolation ensures no cross-contamination between organisations, with dedicated encryption and scoped baselines.
  • Audit trails record every data access with full context, enabling governance teams to review data handling practices systematically.
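
The tokenisation step can be illustrated with a simple in-memory vault. The vault design, token format, and field names are assumptions for the sketch, not the production mechanism:

```python
# Illustrative field-level tokenisation: sensitive values are swapped for
# opaque tokens before a record reaches an agent.
import secrets

class TokenVault:
    def __init__(self) -> None:
        self._store: dict[str, str] = {}

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_urlsafe(16)
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]

def redact(record: dict, sensitive_fields: set[str], vault: TokenVault) -> dict:
    # Replace only the sensitive fields; everything else passes through.
    return {k: vault.tokenize(v) if k in sensitive_fields else v
            for k, v in record.items()}

vault = TokenVault()
safe = redact({"name": "Ada", "iban": "GB33BUKB20201555555555"}, {"iban"}, vault)
assert safe["iban"].startswith("tok_")
assert vault.detokenize(safe["iban"]) == "GB33BUKB20201555555555"
```

The agent only ever sees `safe`; detokenisation happens on the governed side of the boundary.
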
Article 11

Technical Documentation

The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up to date.

How Vortalis Addresses This

  • Every policy, configuration change, and agent registration is versioned and auditable, providing a living technical documentation trail.
  • Automated conformance testing validates governance guarantees and produces structured compliance evidence on demand.
  • Structured audit exports tagged by regulation can be compiled into technical documentation packages for regulatory review.
  • Policy files serve as human-readable, machine-enforceable documentation of AI system behaviour constraints.
Article 12

Record-Keeping

High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the system. Logging capabilities shall ensure a level of traceability appropriate to the intended purpose of the AI system.

How Vortalis Addresses This

  • Cryptographically chained, append-only audit log ensures tamper-evident record-keeping throughout the system's lifetime.
  • Every log entry captures the full context needed for compliance: who acted, what was attempted, and what the policy engine decided.
  • Logs are tagged by regulation (EU AI Act, GDPR, DORA, etc.) for efficient compliance reporting and filtered exports.
  • SIEM integration enables export to your existing security infrastructure for centralised monitoring and long-term retention.
  • Hash chain verification allows auditors to independently confirm that no log entries have been modified, deleted, or reordered.
Article 13

Transparency and Provision of Information

High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately.

How Vortalis Addresses This

  • Policy evaluation results are logged alongside every action, making the decision-making process fully transparent — every permit, deny, or approval requirement is explained.
  • Inter-agent delegation chains are fully traceable, showing exactly how instructions propagate between agents with policy evaluation at each hop.
  • Real-time anomaly detection provides explainable alerts: each flagged behaviour includes the statistical basis for the anomaly determination.
  • Dashboard views present agent activity, policy evaluations, and governance metrics in an accessible format for deployers and compliance teams.
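
The explainable-alert idea can be illustrated with a simple z-score baseline. This is a sketch only, with an assumed threshold and synthetic request counts; production baselines are richer than a single statistic:

```python
# Flag an observation whose z-score against the baseline exceeds a threshold,
# and return the numbers behind the decision so the alert is explainable.
import statistics

def check_anomaly(baseline: list[float], observed: float, threshold: float = 3.0) -> dict:
    mean = statistics.fmean(baseline)
    stdev = statistics.pstdev(baseline)
    z = (observed - mean) / stdev if stdev else 0.0
    return {
        "anomalous": abs(z) > threshold,
        "observed": observed,
        "baseline_mean": round(mean, 2),
        "z_score": round(z, 2),  # the statistical basis included in the alert
    }

hourly_requests = [40, 42, 38, 41, 39, 40, 43, 37]
alert = check_anomaly(hourly_requests, observed=120)
assert alert["anomalous"] and alert["z_score"] > 3
```

The point is the returned dictionary: the alert carries its own justification rather than a bare "anomaly detected" flag.
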
Article 14

Human Oversight

High-risk AI systems shall be designed and developed in such a way as to allow for effective human oversight during the period the system is in use. Human oversight shall aim to prevent or minimise the risks to health, safety, or fundamental rights.

How Vortalis Addresses This

  • Human-in-the-loop approval workflows pause agent operations and route sensitive requests to human reviewers with full context.
  • Reviewers see the requesting agent, target action, data scope, policy evaluation, and historical context before making approval decisions.
  • Every approval or denial decision is immutably logged with the reviewer's identity, timestamp, and reasoning.
  • Kill switches enable immediate human intervention — any operator can halt all agent activity globally or per-service within seconds.
  • Role-based access control with four admin roles ensures appropriate separation of duties for oversight functions.
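
The shape of the approval flow can be sketched with hypothetical field names. In the real system the decision record is immutably logged; this sketch only shows what such a record carries:

```python
# Hypothetical human-in-the-loop flow: a paused request is routed to a
# reviewer, and their decision is recorded with identity, timestamp, and reason.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ApprovalRequest:
    agent: str
    action: str
    data_scope: str
    status: str = "pending"
    decisions: list[dict] = field(default_factory=list)

def review(req: ApprovalRequest, reviewer: str, approve: bool, reason: str) -> None:
    req.status = "approved" if approve else "denied"
    req.decisions.append({
        "reviewer": reviewer,
        "decision": req.status,
        "reason": reason,
        "at": datetime.now(timezone.utc).isoformat(),
    })

req = ApprovalRequest("billing-agent", "export_invoices", "customer PII")
review(req, reviewer="alice@example.com", approve=False, reason="scope too broad")
assert req.status == "denied"
```
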
Article 15

Accuracy, Robustness, and Cybersecurity

High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle.

How Vortalis Addresses This

  • Enterprise-grade encryption with pluggable key management protects all sensitive data at rest and in transit.
  • Runtime sandboxing executes custom adapters in fully isolated processes, preventing resource abuse and credential exfiltration.
  • Static analysis validates adapter code before execution, blocking unsafe operations at the source.
  • Tokens are generated with cryptographically secure entropy and compared using timing-safe methods to resist brute-force and side-channel attacks.
  • Statistical anomaly baselines adapt to normal operating patterns, providing resilient detection of cybersecurity threats throughout the system's lifecycle.
  • Distributed rate limiting prevents abuse across horizontally scaled deployments.
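
The token-handling practice described above maps directly onto standard library primitives. This sketch assumes URL-safe bearer tokens:

```python
# Generate tokens from a cryptographically secure source and compare them in
# constant time, so equality checks leak no timing signal to an attacker.
import hmac
import secrets

def new_token() -> str:
    return secrets.token_urlsafe(32)   # CSPRNG-backed, 256 bits of entropy

def tokens_match(presented: str, stored: str) -> bool:
    # hmac.compare_digest runs in time independent of where the inputs differ,
    # unlike ==, which returns at the first mismatching character.
    return hmac.compare_digest(presented.encode(), stored.encode())

token = new_token()
assert tokens_match(token, token)
assert not tokens_match(new_token(), token)
```
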
Article 17

Quality Management System

Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. The quality management system shall be documented in a systematic and orderly manner.

How Vortalis Addresses This

  • Declarative policies provide a systematic, version-controlled quality management framework — policies are validated before deployment and changes are audited.
  • Automated conformance testing provides continuous quality verification against our governance specification.
  • Structured role-based access control ensures quality management processes have appropriate governance and separation of duties.
  • Anomaly detection with configurable severity thresholds enables systematic quality monitoring with automated responses.
Article 26

Obligations of Deployers

Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions of use. Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions of use.

How Vortalis Addresses This

  • Policy engine enforces usage constraints defined by the provider, ensuring deployers operate AI systems within intended parameters.
  • Real-time monitoring dashboards give deployers visibility into system operation, agent behaviour, and policy compliance.
  • Automated anomaly detection alerts deployers to operational deviations that may indicate misuse or drift from intended use.
  • Audit trail exports provide deployers with the evidence they need to demonstrate compliant operation to regulators.
Article 72

Reporting of Serious Incidents

Providers of high-risk AI systems placed on the Union market shall report any serious incident to the market surveillance authorities of the Member States where that incident occurred.

How Vortalis Addresses This

  • Anomaly detection flags potentially serious incidents in real time, with severity-based escalation and automated response actions.
  • Structured audit exports provide the complete evidence chain regulators need: timeline of events, agent actions, policy evaluations, and data access records.
  • Tamper-evident logging ensures incident records cannot be modified after the fact, maintaining their evidentiary integrity.
  • Regulation-tagged log entries enable efficient extraction of EU AI Act-specific events for incident reports.
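Regulation-tagged extraction reduces to a filter over the log. The `tags` field and tag values here are assumptions for illustration, not the real schema:

```python
# Pull only the EU AI Act-relevant events out of a regulation-tagged log
# when assembling an incident report.
entries = [
    {"event": "export_denied", "tags": ["eu_ai_act", "gdpr"]},
    {"event": "rate_limited", "tags": ["dora"]},
    {"event": "approval_granted", "tags": ["eu_ai_act"]},
]

ai_act_events = [e for e in entries if "eu_ai_act" in e["tags"]]
assert [e["event"] for e in ai_act_events] == ["export_denied", "approval_granted"]
```
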

EU AI Act compliance questions

Common questions about how Vortalis helps you meet the EU AI Act's requirements for high-risk AI systems.

What is the EU AI Act?

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence, adopted in 2024. It takes a risk-based approach: high-risk AI systems (including those used in healthcare, financial services, legal, and critical infrastructure) must comply with strict requirements around transparency, human oversight, risk management, and record-keeping. Obligations phase in over time: prohibitions on unacceptable-risk practices apply from February 2025, obligations for general-purpose AI models from August 2025, and most high-risk requirements from August 2026, with full application by August 2027.

Does using Vortalis make us compliant with the EU AI Act?

Vortalis provides the technical infrastructure that maps to core EU AI Act requirements — tamper-evident audit trails (Article 12), human oversight workflows (Article 14), risk management controls (Article 9), and security measures (Article 15). Compliance also requires organisational measures, documentation, and legal review. Vortalis handles the hardest technical requirements so your team can focus on the rest.

Is my AI system classified as high-risk?

The EU AI Act defines four risk levels: unacceptable (banned), high-risk, limited risk, and minimal risk. AI systems used in healthcare, finance, legal, HR, and critical infrastructure are typically classified as high-risk. If your AI agents process sensitive data or make consequential decisions, Vortalis is designed for your use case.

How does the audit trail work?

Every action that passes through Vortalis is recorded in a cryptographically chained audit log. This creates a tamper-evident, append-only record that can be exported to your SIEM, filtered by regulation tag, and presented to auditors. Each entry captures the full context needed for compliance review.

How does Vortalis support human oversight?

Vortalis supports human-in-the-loop approval workflows at the policy level. When an agent attempts a sensitive action — accessing protected data, delegating to another agent, or operating outside normal parameters — the request is paused and routed to a human reviewer. The reviewer sees full context, approves or denies, and the decision is logged immutably.

How does the EU AI Act interact with the GDPR?

The EU AI Act works alongside the GDPR, not instead of it. Vortalis addresses both: data tokenisation prevents agents from accessing raw personal data (supporting GDPR data minimisation), while audit trails and access controls satisfy AI Act transparency requirements. We also align with DORA for financial services and DSPT for UK healthcare.

Does the EU AI Act apply to companies outside the EU?

Yes. The EU AI Act applies to any organisation that places AI systems on the EU market or whose AI outputs affect people in the EU — regardless of where the organisation is based. If your AI agents serve EU customers or process EU residents' data, the Act likely applies to you. Vortalis is headquartered in London with UK data residency.

Agentic Commerce Governance

See how Vortalis governs AI agent commerce across Mastercard Agent Pay, Google UCP, and OpenAI ACP.

Read the mapping

Start building compliant AI today

Vortalis gives you the technical controls the EU AI Act requires. Request early access and we'll help you map your compliance roadmap.