
ISO 27001 and AI: What Changes in Your ISMS

8 min read · 27kay

AI is both a tool and a risk

AI changes your ISMS in two directions. AI systems you deploy or rely on introduce new risks that your risk assessment must address - data flowing to third-party models, automated decisions with security implications, and supply chain dependencies on AI providers. At the same time, AI-powered tools are transforming how security and compliance work gets done - from GRC platforms that automate evidence collection to threat detection systems that spot anomalies humans miss. Managing both sides is what your ISO 27001 implementation needs to get right.

AI systems as a security risk

When your organization uses AI - whether that is a large language model for customer support, an ML pipeline for fraud detection, or a coding assistant for your developers - you are introducing information assets that need to be risk-assessed like any other.

Data exposure. AI models often need access to organizational data for training, fine-tuning, or prompt context. If employees paste customer data into a public AI chatbot, that data has left your control. Your risk assessment needs to cover where data goes when AI tools are used, whether that data is stored or used for model training, and what contractual protections exist.
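As a minimal sketch of what such a control can look like in practice, a pre-send filter can mask obvious identifiers before a prompt leaves your environment. The patterns and placeholder names below are illustrative, not an exhaustive PII detector:

```python
import re

# Illustrative pre-send filter: mask obvious identifiers before a prompt
# is sent to an external AI service (supports A.8.11 data masking and
# A.8.12 data leakage prevention). Patterns are simplified examples only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def mask_prompt(text: str) -> str:
    """Replace email addresses and phone-like strings with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

masked = mask_prompt(
    "Customer jane.doe@example.com called from +44 20 7946 0958 about a refund."
)
# → "Customer [EMAIL] called from [PHONE] about a refund."
```

A production deployment would rely on a proper DLP tool with broader detection, but the control objective is the same: sensitive data is transformed before it crosses the trust boundary.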

Third-party dependency. Most organizations use AI through APIs from providers like OpenAI, Anthropic, Google, or Microsoft. These are suppliers under your ISMS, and Annex A controls A.5.19 through A.5.22 (supplier relationships) apply. You need to assess their security posture, understand their data processing practices, and have contractual agreements covering information security requirements.

Automated decision-making. AI systems that make or influence decisions - access approvals, threat scoring, content filtering - need oversight. An AI tool that automatically blocks network traffic or flags user behavior creates security implications if it gets things wrong. Your incident management processes (A.5.24-A.5.28) should account for AI-generated false positives and false negatives.

Model integrity. If you train or fine-tune models, the training data and the model itself become information assets. Poisoned training data or compromised models can produce unreliable or malicious outputs. Configuration management (A.8.9) and change management processes should cover model updates and deployments.

Which ISO 27001 controls apply to AI

You do not need a separate framework to manage AI security risks. Annex A of ISO 27001:2022 already provides the relevant controls:

A.5.9 - Asset inventory: Register AI tools, models, and datasets as information assets with defined owners
A.5.10 - Acceptable use: Define policies for which AI tools employees may use and what data they may input
A.5.12 - Classification: Classify data used in AI systems - training data, prompts, and outputs
A.5.19-22 - Supplier management: Assess AI providers as suppliers; review their security, data handling, and SLAs
A.5.23 - Cloud services: Most AI tools are cloud-based - apply cloud security controls
A.8.9 - Configuration management: Document AI system configurations, model versions, and deployment settings
A.8.10 - Information deletion: Ensure AI providers delete your data per contractual terms; verify retention policies
A.8.11 - Data masking: Anonymize or mask sensitive data before using it in AI training or prompts
A.8.12 - Data leakage prevention: Monitor and control what data flows to external AI services
A.8.16 - Monitoring: Monitor AI system behavior for anomalies, unexpected outputs, or misuse

The key is treating AI systems as you would any other information processing system. They go in your asset register, they get risk-assessed, and the controls you select get documented in your Statement of Applicability.

ISO 42001 - the dedicated AI management system

ISO/IEC 42001, published in December 2023, is the first international standard specifically for AI management systems. Where ISO 27001 addresses information security broadly, ISO 42001 addresses the governance, risk, and lifecycle management of AI systems - AI impact assessments, responsible AI principles, data quality requirements, and AI-specific lifecycle controls.

ISO 42001 follows the same Plan-Do-Check-Act structure and Harmonized Structure as ISO 27001, so the two systems integrate naturally. If your organization already has an ISO 27001 ISMS, ISO 42001 works as a complementary layer - similar to how ISO 27701 adds privacy management. Your existing risk methodology, internal audit program, and management review process extend to cover AI-specific concerns without building a separate system from scratch.

Not every organization needs ISO 42001 certification. If you use off-the-shelf AI tools through APIs, ISO 27001’s supplier and cloud controls are likely sufficient. But if you develop AI models, embed AI in your products, or use AI in high-risk decision-making, ISO 42001 gives you the management system framework to govern it properly. Major cloud providers and GRC platforms already support ISO 42001 alongside ISO 27001, making cross-framework compliance practical.

AI in security and compliance operations

AI is not just a risk to manage - it is reshaping how security and compliance work gets done.

Threat detection and SIEM. Modern security information and event management platforms use machine learning to identify anomalous patterns across log data. They detect unusual login patterns, data exfiltration attempts, and lateral movement faster than rule-based systems. This directly supports control A.8.16 (monitoring activities).

GRC and compliance automation. Compliance platforms now embed AI across their workflows - automated evidence collection, control mapping across multiple frameworks, questionnaire auto-completion, and AI-generated remediation guidance. For a 50-person company managing both ISO 27001 and SOC 2, this can cut audit preparation time significantly. These platforms handle the repetitive evidence gathering and cross-referencing that used to consume most of an audit cycle, though they do not replace human judgment on control design and risk decisions.

Vulnerability management. AI-assisted tools can prioritize vulnerabilities based on your specific environment, exploitability, and business context rather than relying solely on CVSS scores. This makes your vulnerability management process (A.8.8) more effective by focusing remediation effort where it matters most.
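A toy sketch of the idea - the weights, field names, and CVE identifiers below are invented for illustration, not taken from any real tool:

```python
# Toy illustration of context-aware prioritization: weight the raw CVSS
# score by threat intelligence and asset criticality instead of sorting
# on CVSS alone. Weights and identifiers are made up for the sketch.
def priority(cvss: float, exploited_in_wild: bool, asset_criticality: float) -> float:
    """Blend severity, exploitation status, and business context into one score."""
    exploit_factor = 1.5 if exploited_in_wild else 1.0
    return round(cvss * exploit_factor * asset_criticality, 2)

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "exploited": False, "criticality": 0.3},  # isolated test box
    {"id": "CVE-B", "cvss": 7.5, "exploited": True,  "criticality": 1.0},  # internet-facing prod
]
ranked = sorted(
    vulns,
    key=lambda v: priority(v["cvss"], v["exploited"], v["criticality"]),
    reverse=True,
)
# The actively exploited medium-severity issue on a critical asset
# outranks the higher-CVSS finding on a low-value system.
```

Real AI-assisted tools use far richer signals, but the principle this illustrates is the same: remediation order should reflect your environment, not just the severity label.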

Security awareness. AI-powered phishing simulations generate realistic, contextual phishing emails tailored to your organization. This improves the effectiveness of awareness training (Clause 7.3) compared to generic templates.

Audit and documentation. AI tools can review documentation for completeness, cross-reference controls against evidence, and identify gaps in your ISMS. This does not replace human judgment in internal audits, but it makes the process faster and more thorough.

The important principle remains: AI tools supporting your ISMS are themselves in scope. If your GRC platform uses AI for evidence analysis, the platform is a supplier, the AI features are assets, and the data they process needs to be classified and protected.

The regulatory context

The EU AI Act is now in effect with a phased implementation schedule running through 2027. Organizations with an ISO 27001 ISMS are already well-positioned to meet many of its requirements - risk assessment, documentation, human oversight, and supplier management are core to both. If your organization deploys AI systems classified as high-risk under the EU AI Act, your ISMS processes can serve as the management system backbone, with ISO 42001 providing the AI-specific governance layer.

For organizations also subject to GDPR, AI systems that process personal data trigger additional requirements around automated decision-making (Article 22), data protection impact assessments (Article 35), and transparency. ISO 27701 bridges the gap between your ISMS and privacy-specific AI requirements.

Common mistakes

Ignoring AI in your risk assessment. If employees are using AI tools - and they almost certainly are - those tools need to appear in your risk register. Shadow AI is the new shadow IT. An acceptable use policy that explicitly addresses AI tools is the minimum starting point.

Treating all AI tools the same. A developer using a coding assistant on code snippets presents a different risk profile from a customer service team pasting customer complaints into an LLM. Risk-assess each use case individually and apply proportionate controls.

Assuming AI providers are automatically secure. Enterprise AI providers generally have strong security, but you still need to verify. Check for SOC 2 reports, review data processing agreements, understand where data is processed geographically, and confirm whether your data is used for model training.

Over-relying on AI compliance tools. GRC platforms with AI features can accelerate your compliance work, but they do not replace understanding your own controls. If your team cannot explain why a control exists without checking the platform, you have a knowledge gap that an auditor will find.

How 27kay can help

We help organizations bring AI into their ISO 27001 ISMS - both as a managed risk and as a tool for better security operations. Whether you need an AI acceptable use policy, a risk assessment covering your AI tools and providers, guidance on how ISO 42001 complements your existing ISMS, or help with EU AI Act compliance, we can help you get the controls right without slowing down adoption.

Using AI tools and want to make sure your ISMS reflects that? Let’s talk - we will help you assess the risks and update your controls to match how your organization actually works.