What is ISO 42001?
The international standard for responsible AI management
Bottom line
ISO/IEC 42001:2023 is the world's first certifiable standard for AI Management Systems (AIMS). Published in December 2023, it gives organizations a framework for governing AI responsibly — from development through deployment and retirement. Think of it as ISO 27001 for AI: it doesn't tell you what to build, it tells you how to govern what you build.
What does it actually do?
ISO 42001 provides a structured management system — a set of policies, processes, risk assessments, and controls — that ensures your organization uses AI responsibly, transparently, and with proper governance. It uses the same Plan-Do-Check-Act methodology as other ISO standards, making it familiar to anyone who's worked with ISO 27001 or ISO 9001.
The standard covers the full AI lifecycle: design, development, deployment, monitoring, and retirement. It addresses governance, risk management, transparency, ethical use, data quality, and human oversight.
Key themes across the standard
While ISO 42001 doesn't define a numbered list of principles, these five themes recur throughout its clauses and controls as foundational to trustworthy AI. Click any theme to learn more.
AI systems face unique security threats beyond traditional software — adversarial inputs that trick models, data poisoning during training, model extraction attacks, and prompt injection. ISO 42001 requires controls that protect not just the infrastructure, but the models themselves and the data pipelines that feed them.
A company deploying an LLM-powered chatbot implements input sanitization to prevent prompt injection, restricts model API access with role-based controls, encrypts model weights at rest, and monitors for unusual query patterns that might indicate model extraction attempts.
AI safety goes beyond traditional software reliability. It addresses whether AI outputs could cause harm to individuals or groups — through incorrect medical advice, unsafe autonomous decisions, or cascading failures in critical systems. The standard requires proportional human oversight based on the risk level of each AI system.
A healthcare AI that suggests diagnoses is classified as high-risk and requires a physician to review every recommendation before it reaches the patient. The system includes confidence scores, flags edge cases for manual review, and has a kill switch that routes all cases to humans if model performance degrades.
Fairness requires systematically identifying and mitigating bias across the AI lifecycle — in training data, model design, and deployment context. ISO 42001 mandates that organizations assess impacts on different demographic groups and implement controls to prevent discriminatory outcomes.
A hiring platform runs demographic parity tests on its resume screening model before each release. When testing reveals the model scores candidates from certain universities disproportionately higher, they retrain with blind features and implement ongoing monitoring dashboards that alert when selection rates diverge across groups.
Transparency means stakeholders understand when AI is being used, what it does, how decisions are made, and what its limitations are. ISO 42001 requires organizations to inform users about AI involvement, provide appropriate explanations, and make documentation accessible to different audiences.
A bank's loan decisioning system includes a customer-facing explanation page that describes what factors the AI considers, provides a plain-language reason for each decision, offers a clear appeals process, and displays a disclaimer about the model's known limitations — all without exposing proprietary algorithms.
AI is only as good as its data. This principle covers the entire data lifecycle: provenance tracking, quality assessment, bias detection in datasets, consent management, appropriate retention, and secure disposal. Poor data quality is the root cause of most AI failures in production.
A fraud detection company maintains a data registry documenting each dataset's source, collection method, demographic breakdown, and consent status. Automated pipelines check for missing values, label consistency, and distribution drift before any data enters the training pipeline. Quarterly audits verify data quality metrics against established thresholds.
The 10 clauses
ISO 42001 has 10 clauses. Clauses 1–3 set the foundation (scope, references, definitions). Clauses 4–10 contain the mandatory requirements your organization must meet. Click any clause to see details and examples.
- Applies to any organization that provides or uses AI-based products or services
- Covers the full lifecycle: design, development, deployment, monitoring, and retirement
- Technology-agnostic — applies to machine learning, deep learning, rule-based systems, and hybrid approaches
- Scalable to any organization size, from startups to global enterprises
A SaaS company building an LLM-powered customer support tool would scope their AIMS to cover that specific product, its training data pipeline, and its deployment infrastructure.
- ISO/IEC 22989 provides the foundational AI vocabulary used throughout ISO 42001
- Additional standards in the ISO 42000 series have been published or are in development (e.g., ISO 42005:2025 for AI impact assessment)
- Understanding these references helps ensure consistent interpretation of requirements
When ISO 42001 refers to an 'AI system,' the precise definition comes from ISO/IEC 22989 — ensuring everyone in your organization uses the same terminology during implementation.
- Defines 'AI system' as a system that uses AI to generate outputs like predictions, recommendations, or decisions
- Clarifies terms like 'interested party' (stakeholders affected by your AI), 'risk' (effect of uncertainty on objectives), and 'continual improvement'
- Aligned with terminology from ISO/IEC 22989 and the broader ISO management system vocabulary
- Having shared definitions prevents misunderstandings during implementation and audits
Your internal AI chatbot, a third-party fraud detection API you integrate, and an ML model that recommends products — all qualify as 'AI systems' under the standard's definition.
- Identify external issues: regulations (EU AI Act, sector-specific rules), market expectations, competitive landscape
- Identify internal issues: organizational culture, technical maturity, existing governance frameworks
- Map all interested parties and their requirements (customers, regulators, employees, partners)
- Define the scope and boundaries of your AIMS — which AI systems, business units, and processes are included
A healthcare AI company would identify HIPAA, FDA software regulations, and the EU AI Act as external factors; their data science team's maturity and existing ISO 27001 as internal factors; and patients, doctors, and regulators as key interested parties.
- Top management must demonstrate active commitment — not just sign off, but actively participate in governance decisions
- Establish and communicate an AI policy that includes commitments to responsible AI, compliance, and continual improvement
- Assign clear roles: AIMS Manager, AI Ethics Lead, Data Steward, Model Validator
- Ensure adequate budget, people, tools, and training are allocated to support the AIMS
The CTO sponsors the AIMS initiative, the VP of Engineering owns the AI policy, a dedicated AI Governance Lead coordinates day-to-day operations, and each AI product team has a designated compliance champion.
- Conduct a comprehensive AI risk assessment covering: bias and fairness, safety, transparency, privacy, security, and reliability
- Set measurable AI objectives aligned with your AI policy (e.g., 'reduce model bias by 30% in Q3')
- Develop a risk treatment plan with specific controls, owners, and timelines
- Create a Statement of Applicability (SoA) documenting which Annex A controls apply and why
During risk assessment, you discover your recommendation engine has a gender bias in job suggestions. The risk treatment plan assigns the ML team to implement fairness constraints, sets a 6-week deadline, and schedules monthly bias audits going forward.
- Define competence requirements for each AI-related role (data scientists, MLOps engineers, product managers)
- Provide training on responsible AI principles, your AI policy, and AIMS procedures
- Ensure all staff are aware of their obligations under the AIMS
- Establish communication channels for reporting AI incidents, concerns, and improvement ideas
- Maintain documented information (records, policies, procedures) as required by the standard
You create an 'AI Governance 101' onboarding module for all new engineers, run quarterly bias-awareness workshops, and set up a #ai-incidents Slack channel where anyone can report concerns about model behavior.
- Implement operational controls for the full AI lifecycle: data collection, training, testing, deployment, monitoring, retirement
- Conduct AI impact assessments before deploying new AI systems or making significant changes
- Manage third-party AI suppliers and ensure they meet your governance requirements
- Establish processes for data quality management, model validation, and change management
- Implement human oversight mechanisms appropriate to the risk level of each AI system
Before launching a new credit scoring model, you run an impact assessment covering fairness across demographics, conduct adversarial testing, document the model card, set up drift monitoring alerts, and define the escalation path when the model flags for review.
- Define KPIs for your AIMS: model accuracy, bias metrics, incident response times, audit findings
- Conduct regular internal audits against ISO 42001 requirements and your own AI policy
- Monitor AI system performance continuously — not just accuracy, but fairness, drift, and reliability
- Hold management reviews at planned intervals to evaluate AIMS effectiveness and identify improvements
- Document all evaluation results and use them to drive continual improvement
You run quarterly internal audits where a cross-functional team reviews each AI system against its documented controls. Monthly dashboards track model drift, fairness scores, and incident counts. The leadership team reviews AIMS performance every 6 months.
- Establish a process for identifying and documenting nonconformities (gaps between requirements and reality)
- Implement corrective actions that address root causes, not just symptoms
- Track corrective actions to closure and verify their effectiveness
- Proactively identify opportunities for improvement beyond just fixing problems
- Adapt your AIMS as AI technology evolves, regulations change, and your organization grows
An internal audit finds that model retraining doesn't consistently trigger a new bias review. The corrective action adds an automated bias check to the CI/CD pipeline, the root cause (manual process) is documented, and a follow-up audit in 90 days confirms the fix is working.
Annex A: AI-specific controls
The most distinctive part of ISO 42001. Annex A contains controls across 9 categories (A.2 through A.10) addressing AI-unique risks. Click any category to see its controls and a real-world example.
- Define an AI policy aligned with organizational objectives and values
- Include commitments to responsible AI principles, compliance, and continual improvement
- Communicate the AI policy to all relevant personnel and interested parties
- Review and update the AI policy at planned intervals or when significant changes occur
A fintech company publishes an internal AI policy that mandates fairness testing before any model goes to production, requires human review of credit decisions, and commits to annual third-party bias audits — then makes a public summary available to customers.
- Assign clear accountability for AI governance at the leadership level
- Establish an AI governance committee or equivalent oversight body
- Define roles and responsibilities for AI system lifecycle activities
- Ensure separation of duties where AI decisions carry significant risk
- Integrate AI governance into existing organizational structures
A healthcare AI company creates an AI Ethics Board with representatives from engineering, legal, clinical, and patient advocacy. The board reviews all new AI deployments, has authority to halt releases, and reports quarterly to the CEO.
- Allocate computational resources appropriate for AI development, testing, and monitoring
- Ensure access to diverse, representative datasets for training and validation
- Provide tools and infrastructure for model monitoring, versioning, and rollback
- Maintain sufficient human expertise across AI-related roles
- Budget for ongoing AI governance activities including audits and training
An e-commerce company allocates dedicated GPU clusters for bias testing (separate from production training), licenses a model monitoring platform, and budgets for quarterly external fairness audits — ensuring governance isn't under-resourced.
- Conduct impact assessments before deploying AI systems or making significant changes
- Evaluate potential impacts on human rights, fairness, privacy, safety, and the environment
- Assess impacts on different demographic groups and vulnerable populations
- Document assessment methodology, findings, and mitigation measures
- Review impact assessments periodically and when system behavior changes significantly
Before deploying a resume screening AI, an HR tech company conducts an impact assessment covering gender, ethnicity, age, and disability bias. They test with synthetic and real-world data, document findings, identify a gender gap in technical roles, retrain with balanced data, and schedule quarterly re-assessments.
- Establish processes for each lifecycle phase: design, development, testing, deployment, monitoring, retirement
- Implement version control and change management for models and data pipelines
- Define acceptance criteria and validation procedures before deployment
- Monitor AI systems in production for drift, degradation, and emerging risks
- Establish procedures for decommissioning AI systems including data handling
A bank's credit scoring model follows a strict lifecycle: data scientists develop in a sandboxed environment, the model review board validates against fairness benchmarks, deployment requires sign-off from risk and compliance, production monitoring alerts on drift beyond thresholds, and the previous model version is kept for 90-day rollback.
- Establish data quality requirements including accuracy, completeness, and timeliness
- Document data provenance — where data comes from and how it was collected
- Assess training data for bias, representativeness, and relevance
- Implement data governance controls for labeling, storage, and access
- Manage data throughout its lifecycle including retention and secure disposal
A medical imaging AI company maintains a data registry tracking every dataset's source hospital, patient demographics, collection method, and consent status. Before training, they run automated checks for demographic balance and flag datasets that underrepresent certain populations — preventing the model from performing poorly on specific patient groups.
- Inform users when they are interacting with or subject to AI-driven decisions
- Provide clear explanations of what AI systems do and their intended purpose
- Communicate known limitations, confidence levels, and potential failure modes
- Make information available about how to contest or seek review of AI decisions
- Maintain documentation accessible to different stakeholder groups
A SaaS company's AI-powered hiring tool includes a candidate-facing disclosure page explaining: what the AI evaluates, what it doesn't, known limitations, how to request a human review, and a link to the company's AI policy. Recruiters receive a separate guide on interpreting AI recommendations and when to override them.
- Define acceptable use policies for AI systems appropriate to their risk level
- Implement human oversight mechanisms proportional to the impact of AI decisions
- Establish escalation procedures for AI outputs that require human review
- Monitor for misuse, unintended use, and emerging patterns of harm
- Provide training to operators and users on appropriate AI system use
A content moderation AI auto-removes clearly violating content but flags borderline cases for human review. Moderators can override any AI decision, and a weekly review samples auto-removed content to catch false positives. Usage policies prohibit using the AI for political speech moderation without human involvement.
- Assess AI-related risks from third-party suppliers, partners, and service providers
- Include AI governance requirements in contracts and service agreements
- Monitor third-party AI components for continued compliance and performance
- Provide customers with information needed to use your AI products responsibly
- Establish procedures for handling AI incidents involving third-party components
A company integrating a third-party LLM API includes contractual clauses requiring the provider to disclose training data practices, notify of model changes, and maintain an incident response SLA. They run monthly evaluations of the API's outputs for bias and accuracy, and maintain a fallback plan if the provider's governance practices change.
Annex B provides implementation guidance, Annex C covers organizational objectives and risk sources, and Annex D addresses cross-standard integration and domain-specific application.
Who is it for?
Any organization that develops, provides, or uses AI — regardless of size or industry. The standard scales from a 20-person startup to a global enterprise. It applies whether you build AI in-house, integrate third-party AI, or use AI-powered tools.
ISO 42001 vs ISO 27001
They're complementary, not competitors. Both use Annex SL structure, so organizations with ISO 27001 can leverage much of their existing work.
| ISO 27001 | ISO 42001 | |
|---|---|---|
| Focus | Information security | AI governance |
| Protects | Information assets | How AI uses information |
| Controls | Encryption, access, firewalls | Bias, fairness, transparency |
| Risks | Data breaches | Model drift, algorithmic harm |
| Published | 2005 (updated 2022) | December 2023 |
ISO 27001 protects the system. ISO 42001 governs the decisions.
Relationship to the EU AI Act
The EU AI Act is law. ISO 42001 is voluntary. But implementing ISO 42001 addresses many of the Act's requirements around risk management, documentation, transparency, and human oversight.
- Feb 2025: Prohibited AI practices in effect
- Aug 2025: GPAI model rules and governance
- Aug 2026: High-risk AI requirements fully applicable
- Penalties (tiered): Up to EUR 35M/7% (prohibited), EUR 15M/3% (high-risk), EUR 7.5M/1% (misinformation)