HANALEI.DEV PORTFOLIO / Policy Analysis

EU AI Act Compliance Mapping

A comparative analysis mapping EU AI Act requirements to existing NIST AI RMF and ISO controls, identifying gaps and proposing practical compliance pathways for organizations in the early stages of AI governance maturity.

Document Type: Compliance Analysis
Frameworks: EU AI Act, NIST AI RMF, ISO 42001
Audience: Compliance, Legal, AI Teams
Version: 1.0

Why This Mapping Matters

The EU AI Act, which entered into force in August 2024, is the world's first comprehensive legal framework for artificial intelligence. It applies to organizations that place AI systems on the EU market or whose AI systems affect people in the EU, regardless of where the organization is headquartered.

For organizations that have already invested in NIST AI RMF or ISO 27001 / ISO 42001 compliance work, significant overlap exists. This analysis identifies where existing controls satisfy EU AI Act requirements, where partial coverage leaves gaps, and where entirely new compliance activities are needed.

Scope Note

This mapping focuses on the High-Risk AI system obligations under Chapter III of the EU AI Act, which carry the most substantial compliance burden. General-purpose AI model (GPAI) obligations under Chapter V are addressed separately in the gap analysis section.

EU AI Act Risk Tiers

The EU AI Act classifies AI systems into four risk tiers, each with distinct obligations. Understanding which tier applies to your AI use cases is the prerequisite for all compliance work.

Unacceptable Risk
Prohibited
AI practices that pose unacceptable risks to fundamental rights. Banned outright.
Examples:
- Social scoring by governments
- Real-time biometric surveillance in public spaces (with limited exceptions)
- Subliminal manipulation systems
- Exploitation of vulnerable groups
High Risk
Heavily Regulated
AI in critical infrastructure, education, employment, essential services, law enforcement, migration, and justice.
Examples:
- AI-assisted hiring and performance evaluation
- Credit scoring and financial services AI
- Medical device AI
- AI in educational assessment
Limited Risk
Transparency Required
AI systems that interact with users must disclose their AI nature. Lighter compliance burden.
Examples:
- Chatbots and conversational AI
- AI-generated content
- Emotion recognition systems
- Deepfake generation tools
Minimal Risk
Voluntary Codes
AI with minimal risk to rights or safety. No mandatory obligations but voluntary codes of conduct encouraged.
Examples:
- Spam filters
- AI-powered recommendation engines
- Productivity assistants
- General-purpose drafting tools
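The four tiers above can be captured as a simple lookup for use in an internal system inventory. This is an illustrative sketch only: the tier names and examples come from the summary above, the structure and function names are hypothetical, and assigning any real system to a tier requires legal analysis.

```python
# Illustrative only: the four EU AI Act risk tiers as a lookup table.
# Mapping a real system to a tier is a legal determination, not a code lookup.
RISK_TIERS = {
    "unacceptable": {
        "obligation": "Prohibited",
        "examples": ["social scoring by governments", "subliminal manipulation systems"],
    },
    "high": {
        "obligation": "Heavily regulated (Chapter III)",
        "examples": ["AI-assisted hiring", "credit scoring", "medical device AI"],
    },
    "limited": {
        "obligation": "Transparency required",
        "examples": ["chatbots", "deepfake generation tools"],
    },
    "minimal": {
        "obligation": "Voluntary codes of conduct",
        "examples": ["spam filters", "recommendation engines"],
    },
}

def obligation_for(tier: str) -> str:
    """Return the headline obligation for a risk tier; raises KeyError if unknown."""
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("high"))  # Heavily regulated (Chapter III)
```

A table like this is most useful as the classification column of an AI system inventory, which Stage 1 of the roadmap below treats as the first compliance activity.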

Requirements Mapping: EU AI Act to NIST & ISO

The table below maps the primary High-Risk AI system obligations under the EU AI Act to corresponding controls in the NIST AI RMF 1.0 and ISO 42001. Gap status indicates whether existing framework adoption satisfies, partially satisfies, or does not address the EU AI Act requirement.

Each entry lists the EU AI Act requirement, the closest NIST AI RMF and ISO 42001 controls, a gap status, and analysis.

Risk Management System
EU AI Act: Art. 9 (ongoing risk identification, evaluation, and mitigation throughout the AI lifecycle)
NIST AI RMF: GOVERN 1.1, MAP 1.1, MEASURE 2.1, MANAGE 1.1
ISO 42001: Clause 6.1, 8.4
Gap Status: Covered
Analysis: NIST AI RMF's four-function structure (Govern, Map, Measure, Manage) directly satisfies Art. 9 requirements. Organizations with an implemented RMF profile can map their risk management documentation to EU AI Act obligations with minimal additional work.

Data Governance
EU AI Act: Art. 10 (training data quality, relevance, and bias examination)
NIST AI RMF: MAP 2.3, MEASURE 2.2, MANAGE 2.2
ISO 42001: Clause 8.5
Gap Status: Partial
Analysis: NIST and ISO address data quality and bias evaluation but do not specify the documentation depth required by Art. 10. The EU AI Act requires formal data governance procedures with written records of data sources, preprocessing steps, and bias examination results. Organizations must supplement existing controls with formal documentation artifacts.

Technical Documentation
EU AI Act: Art. 11 & Annex IV (comprehensive technical documentation before market placement)
NIST AI RMF: GOVERN 1.7, MAP 5.1
ISO 42001: Clause 7.5
Gap Status: Gap
Analysis: EU AI Act Annex IV specifies a detailed documentation template that exceeds what NIST or ISO require. Required elements include a general system description, design specifications, training methodology, performance metrics, and known limitations. Most organizations will need to create new documentation artifacts specifically to meet this requirement.

Record-Keeping
EU AI Act: Art. 12 (automatic logging of system operation for post-market monitoring)
NIST AI RMF: MANAGE 3.1, MEASURE 1.1
ISO 42001: Clause 9.1
Gap Status: Partial
Analysis: NIST and ISO both address monitoring, but Art. 12 requires automatic, tamper-evident logs retained for a legally specified minimum period (at least six months under Arts. 19 and 26, longer where other Union or national law applies). Organizations must verify that logging infrastructure meets retention and tamper-evidence requirements and that logs are scoped to capture the specific events Art. 12 mandates.

Transparency to Users
EU AI Act: Art. 13 (clear instructions for use; disclosure of capabilities and limitations)
NIST AI RMF: GOVERN 1.4, MAP 5.2
ISO 42001: Clause 8.6
Gap Status: Partial
Analysis: NIST's GOVERN function addresses transparency principles, but EU AI Act Art. 13 requires specific written instructions for use provided to deployers. These must cover intended purpose, performance limitations, foreseeable misuse, and human oversight measures. Most NIST implementations do not produce this artifact in the required form.

Human Oversight
EU AI Act: Art. 14 (effective human oversight measures built into system design)
NIST AI RMF: GOVERN 1.4, MANAGE 4.1
ISO 42001: Clause 8.7
Gap Status: Covered
Analysis: Human oversight is a core concept in both NIST AI RMF and ISO 42001. Organizations with mature implementations of these frameworks will have documented human review requirements, override mechanisms, and accountability structures that satisfy Art. 14. Gap risk is low for organizations with established HITL policies.

Accuracy, Robustness, Cybersecurity
EU AI Act: Art. 15 (performance standards and resilience against adversarial attacks)
NIST AI RMF: MEASURE 2.5, MANAGE 2.2
ISO 42001: Clause 8.4; ISO 27001 A.12
Gap Status: Partial
Analysis: Cybersecurity controls from ISO 27001 and NIST CSF map to Art. 15 security requirements. However, AI-specific robustness requirements (accuracy metrics, error rates, performance under distribution shift) go beyond traditional cybersecurity scope. Organizations must supplement security controls with AI-specific performance benchmarking and adversarial testing documentation.

Conformity Assessment
EU AI Act: Art. 43 (self-assessment or third-party conformity assessment before deployment)
NIST AI RMF: MEASURE 1.1, MANAGE 1.3
ISO 42001: Clause 9.2
Gap Status: Gap
Analysis: The EU AI Act requires formal conformity assessment with specific documentation and, for some high-risk categories, mandatory third-party assessment by a notified body. Neither NIST AI RMF nor ISO 42001 produces the EU Declaration of Conformity artifact or engages notified bodies. This is a new compliance activity with no direct analog in existing frameworks.

Post-Market Monitoring
EU AI Act: Art. 72 & 73 (ongoing monitoring plan; serious incident reporting to authorities)
NIST AI RMF: MANAGE 3.2, MEASURE 1.1
ISO 42001: Clause 9.1
Gap Status: Partial
Analysis: NIST and ISO address ongoing monitoring, but the EU AI Act requires a formal post-market monitoring plan submitted to EU authorities and a serious incident reporting obligation (within 15 days of becoming aware of a serious incident). Regulatory notification workflows are outside the scope of both frameworks.

GPAI Transparency
EU AI Act: Art. 53 & 55 (documentation and copyright compliance for general-purpose AI models)
NIST AI RMF: GOVERN 6.2
ISO 42001: Clause 7.5
Gap Status: Gap
Analysis: GPAI obligations are largely new territory with no direct NIST or ISO parallel. Organizations deploying or building on GPAI models must maintain technical documentation on training data (including copyright compliance), publish summaries of training content, and implement policies to respect rightsholders. These requirements have no established control analog in existing frameworks.
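For tracking purposes, the mapping above can be held as structured records so gap status can be summarized programmatically, for example in a compliance dashboard. The sketch below is illustrative: the field layout is hypothetical, while the article numbers and statuses are taken from the mapping.

```python
# Illustrative sketch: the requirements mapping as (requirement, articles, status)
# records, with a gap-status rollup. Statuses mirror the mapping above.
from collections import Counter

MAPPING = [
    ("Risk Management System", "Art. 9", "Covered"),
    ("Data Governance", "Art. 10", "Partial"),
    ("Technical Documentation", "Art. 11 & Annex IV", "Gap"),
    ("Record-Keeping", "Art. 12", "Partial"),
    ("Transparency to Users", "Art. 13", "Partial"),
    ("Human Oversight", "Art. 14", "Covered"),
    ("Accuracy, Robustness, Cybersecurity", "Art. 15", "Partial"),
    ("Conformity Assessment", "Art. 43", "Gap"),
    ("Post-Market Monitoring", "Art. 72 & 73", "Partial"),
    ("GPAI Transparency", "Art. 53 & 55", "Gap"),
]

# Roll up how much of the EU AI Act burden existing frameworks already cover.
status_counts = Counter(status for _, _, status in MAPPING)
open_items = [req for req, _, status in MAPPING if status != "Covered"]

print(dict(status_counts))
print(f"{len(open_items)} of {len(MAPPING)} requirements need supplementary work")
```

The rollup makes the headline finding explicit: only two of ten High-Risk obligations are fully covered by existing NIST or ISO adoption, and the three outright gaps (technical documentation, conformity assessment, GPAI transparency) drive most of the Stage 2 work below.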

Maturity-Based Compliance Roadmap

The pathway below is designed for organizations in the early stages of AI governance maturity that have some NIST AI RMF or ISO 27001 foundation and need to build toward EU AI Act compliance without starting from scratch.

Stage 1
Foundation
Months 1 to 3
- Inventory all AI systems and classify them against the EU AI Act risk tiers
- Identify which systems fall under High-Risk or GPAI obligations
- Map existing NIST AI RMF documentation to Annex IV requirements
- Assign a legal lead with EU regulatory experience
- Enter key EU AI Act deadlines into a compliance calendar (deadlines vary by tier)
Stage 2
Gap Closure
Months 4 to 9
- Draft Annex IV technical documentation for each High-Risk system
- Implement tamper-evident logging meeting Art. 12 retention requirements
- Develop instructions-for-use documents required under Art. 13
- Conduct adversarial testing and document accuracy and robustness benchmarks
- Build a serious incident reporting workflow to EU market surveillance authorities
- Engage a notified body for systems requiring third-party conformity assessment
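One Stage 2 item, tamper-evident logging, lends itself to a concrete sketch. A common approach is a hash-chained append-only log, in which each entry's hash covers the previous entry, so any retroactive edit breaks verification. This is illustrative only, not a compliance implementation: function names and the event schema are hypothetical, and a real Art. 12 deployment also needs retention controls, access controls, and durable write-once storage.

```python
# Illustrative only: a hash-chained append-only log, one way to approach the
# tamper-evidence discussed for Art. 12 record-keeping. Not a compliance
# implementation; retention, access control, and storage are out of scope here.
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    """Append an event whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev_hash": prev_hash, "hash": entry_hash})

def verify_chain(log: list) -> bool:
    """Recompute every hash in order; any edited entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"ts": "2025-01-01T00:00:00Z", "event": "inference", "model": "v1"})
append_entry(log, {"ts": "2025-01-01T00:01:00Z", "event": "override", "user": "reviewer-1"})
assert verify_chain(log)

log[0]["event"]["model"] = "v2"  # simulate tampering with a past record
assert not verify_chain(log)
```

The design choice worth noting is that verification needs only the log itself, which makes the chain auditable by a market surveillance authority or notified body without trusting the operator's infrastructure.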
Stage 3
Certification
Month 10 onward
- Complete self-assessment or notified body assessment
- Issue an EU Declaration of Conformity for applicable systems
- Affix CE marking where required
- Register High-Risk systems in the EU AI Act database
- Activate the post-market monitoring plan
- Schedule an annual compliance review cadence