Governing AI Responsibly with ISO/IEC 42001
Join one of NQA's ISMS regional assessors, Adrian Brisett, as he discusses the development of AI and ISO 42001.
Once seen as tomorrow’s technology, Artificial Intelligence (AI) has fascinated us through both real-world applications and imaginative portrayals in film, often raising profound ethical questions. AI is rapidly transforming every sector that uses information technology. Alongside its benefits, AI also poses risks (ethical, societal, and operational) that demand effective governance. That’s where ISO/IEC 42001, the first global AI management system standard, comes in. To frame our discussion, here’s a quote from ISO: “Artificial Intelligence is increasingly applied across all sectors utilising information technology and is expected to be one of the main economic drivers. A consequence of this trend is that certain applications can give rise to societal challenges over the coming years.”
The aim of this session, and now this blog, is to explore how ISO/IEC 42001 supports organisations in addressing those societal challenges. Specifically, we’ll look at how the standard helps organisations adopt responsible AI governance frameworks that balance innovation with accountability.
What is Artificial Intelligence?
One of the most intriguing aspects of artificial intelligence (AI) is that its subject matter remains surprisingly difficult to define with precision. One way to get closer to a definition is to revisit the field’s origins in 1950, particularly the vision laid out by early pioneers such as Alan Turing. His foundational work set the stage for thinking about machines that could mimic human intelligence, offering a conceptual starting point that still guides the field today (the term “Artificial Intelligence” itself was coined a few years later, for the 1956 Dartmouth workshop).

In essence, AI refers to software applications designed to simulate human-like behaviours such as speech, reasoning, and vision, and to make decisions. It’s used across diverse sectors including healthcare, finance, marketing, and cybersecurity. AI is a broad ecosystem, encompassing:
- Robotics – machines that act in the physical world.
- Natural Language Processing (NLP) – enabling machines to understand human language.
- Computer Vision – interpreting and responding to visual input.
- Generative AI (e.g., ChatGPT) – producing text, images, or code from prompts. ChatGPT’s public launch in November 2022 saw it reach 100 million users in just two months, making it the fastest-growing consumer app in history at the time.

The political push for AI regulation
The political landscape reveals a clear trend: AI is no longer merely a technological issue; it has become a strategic priority for national governance, public safety, and ethical oversight.
- The European Union’s AI Act is the world’s first comprehensive legal framework for AI. It directly addresses the risks posed by AI systems while positioning Europe as a global leader in responsible AI regulation. The Act aims to ensure that Europeans can trust the AI technologies they interact with. While most AI systems present minimal or no risk and hold great potential to address societal challenges, the Act highlights that certain types of AI pose significant risks that must be actively managed to prevent harmful or unintended outcomes. The Act defines four levels of risk for AI systems: unacceptable, high, limited, and minimal.
- The United States has taken a more sector-specific and voluntary approach. The NIST AI Risk Management Framework (2023) offers guidance to help organisations manage AI risks responsibly, while the recently introduced AI Action Plan (July 23, 2025) signals a shift toward more strategic federal involvement and oversight, especially in high-impact sectors.
- The United Kingdom has opted for a pro-innovation stance. Its AI White Paper (2023) outlines a light-touch, principles-based regulatory approach focused on encouraging innovation, with regulators applying context-specific guidance rather than rigid rules.
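To make the EU’s tiering concrete, the four risk levels and their broad regulatory consequences can be sketched as a simple lookup. The tier names follow the Act, but the obligation summaries below are illustrative shorthand, not legal text:

```python
# Illustrative sketch of the EU AI Act's four risk tiers.
# Tier names follow the Act; obligation summaries are shorthand, not legal text.
RISK_TIERS = {
    "unacceptable": "banned outright (e.g. social scoring by public authorities)",
    "high": "strict obligations: risk management, data governance, human oversight",
    "limited": "transparency duties (e.g. disclose that users are talking to AI)",
    "minimal": "no specific obligations; voluntary codes of conduct encouraged",
}

def obligations_for(tier: str) -> str:
    """Return the shorthand obligations for a given risk tier."""
    try:
        return RISK_TIERS[tier.lower()]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}") from None

print(obligations_for("high"))
```

In practice, classifying a system into a tier is the hard part; the Act’s annexes list the use cases (such as biometric identification or credit scoring) that push a system into the high-risk category.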
From an economic perspective, generative AI is proving to be far more than just another emerging technology: it’s a genuine productivity multiplier. McKinsey’s 2023 analysis suggests it could contribute up to $4.4 trillion every year to the global economy. This surge is being fuelled by major investment across a wide range of sectors. In customer service, AI-powered chatbots and virtual agents are delivering faster, more personalised support at scale. In content creation, marketing teams, media outlets, and designers are using generative models to produce high-quality material on demand. Software developers are accelerating delivery through AI-assisted code generation, while legal, HR, and compliance teams are automating repetitive but critical processes with greater accuracy.
Perhaps most excitingly, industries such as drug discovery, fintech, and manufacturing are harnessing AI to speed up innovation cycles, reduce costs, and bring complex products to market more quickly than ever before. As AI becomes more deeply embedded in the way companies operate, it also raises pressing questions about environmental sustainability. AI models, especially large-scale GenAI, consume significant computing power, which in turn drives energy use and carbon emissions. That’s where the integration of ISO 14001 and ISO 42001 can create a robust, climate-conscious approach to AI adoption.
ISO 14001 provides the foundation: a proven environmental management framework to help organisations monitor and reduce emissions, optimise resource use, and comply with climate-related regulations. ISO 42001, on the other hand, governs the responsible development, deployment, and oversight of AI systems, ensuring they are safe, ethical, transparent, and aligned with organisational values. When combined, these standards allow an organisation to address both the environmental and technological risks of AI in a single, joined-up governance model. ISO 42001 can be used to evaluate the energy profile of AI systems, ensure algorithmic efficiency, and align AI applications with sustainability goals. ISO 14001 ensures those goals are embedded into broader business operations, measured against environmental KPIs, and continuously improved over time.
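As a hedged illustration of what an ISO 14001-style KPI for AI workloads might look like, the underlying arithmetic is simple: energy (kWh) equals average power draw times runtime, and emissions equal energy times the grid’s carbon intensity. The figures in this sketch are placeholders, not measurements:

```python
def training_emissions_kg(avg_power_kw: float, hours: float,
                          grid_kg_co2e_per_kwh: float) -> float:
    """Estimate CO2e for a compute workload.

    energy (kWh)        = average power draw (kW) x runtime (hours)
    emissions (kg CO2e) = energy x grid carbon intensity (kg CO2e/kWh)
    """
    energy_kwh = avg_power_kw * hours
    return energy_kwh * grid_kg_co2e_per_kwh

# Placeholder example: an 8-GPU node drawing ~4 kW for 72 hours
# on a grid emitting 0.2 kg CO2e per kWh.
print(f"{training_emissions_kg(4.0, 72, 0.2):.1f} kg CO2e")
```

Tracking a figure like this per training run or per deployed model is one way to turn the ISO 14001/42001 pairing into a measurable environmental KPI rather than a statement of intent.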

The Top 10 AI Risks for UK Businesses
- Bias and discrimination – unfair outcomes from skewed data.
- Deepfakes and misinformation – realistic but false content damaging trust.
- Privacy breaches – violations of laws like GDPR.
- Lack of explainability – “black box” decisions that can’t be justified.
- Job displacement – replacing human roles without reskilling plans.
- Cybersecurity threats – from prompt injection to data poisoning.
- Runaway model updates – changes without proper oversight.
- Legal noncompliance – failing to meet laws like the EU AI Act.
- Overreliance and de-skilling – erosion of human expertise.
- Brand damage – reputational harm from unsafe AI outputs.
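The first of these risks, bias, is also among the most measurable. One common (though by no means sufficient) check is the demographic parity difference: the gap in positive-outcome rates between groups. A minimal pure-Python sketch, using made-up loan-decision data:

```python
def demographic_parity_difference(outcomes, groups):
    """Gap in positive-outcome rates between the groups present.

    outcomes: iterable of 0/1 decisions (1 = favourable outcome)
    groups:   iterable of group labels, same length as outcomes
    """
    rates = {}
    for g in set(groups):
        selected = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    return max(rates.values()) - min(rates.values())

# Made-up decisions: group A approved 3 of 4, group B approved 1 of 4.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

A metric like this makes bias monitoring auditable: an organisation can set a threshold, log the value for each model release, and trigger review when the gap widens.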
For UK businesses, the EU AI Act’s extraterritorial reach means compliance is crucial when selling AI-enabled products into the EU, processing EU residents’ data, or supplying AI components to EU firms.

Annexes, Toolkits, and Integration
ISO/IEC 42001 includes:
- Annex A – a governance toolkit for defining responsibilities, setting explainability requirements, monitoring bias, and incident handling.
- Annex B – guidance for integrating AI governance with existing standards like ISO 27001, reducing duplication and streamlining compliance.

Certification Benefits
Achieving ISO/IEC 42001 certification demonstrates transparency, accountability, and ethical AI management across the lifecycle. It sets a global benchmark for trustworthy AI, ensuring systems are secure, fair, and reliable, while building public trust.

Clause Overview – Embedding Governance
The standard’s clauses follow an AI system’s lifecycle:
- Context – understanding your AI’s purpose and stakeholders.
- Leadership – assigning top-level responsibility.
- Planning – risk assessment and objective setting.
- Support – ensuring training and resources.
- Operation – executing plans with full traceability.
- Performance evaluation – audits, KPIs, and reviews.
- Improvement – continuous refinement and adaptation.
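The clause structure above maps naturally onto a simple readiness checklist. In this sketch the clause numbers follow ISO’s harmonised structure (clauses 4–10), but the example evidence items are illustrative assumptions, not ISO text:

```python
# Illustrative readiness checklist keyed to ISO's harmonised clause structure.
# Clause numbers are standard; the example evidence is an assumption, not ISO text.
CLAUSES = {
    4: ("Context", "AI system inventory; stakeholder and purpose analysis"),
    5: ("Leadership", "AI policy signed off; roles and responsibilities assigned"),
    6: ("Planning", "AI risk assessments; measurable objectives"),
    7: ("Support", "training records; resourcing and competence evidence"),
    8: ("Operation", "operational controls; traceability of AI decisions"),
    9: ("Performance evaluation", "internal audits, KPIs, management reviews"),
    10: ("Improvement", "nonconformity log; corrective-action tracking"),
}

def readiness(completed: set) -> float:
    """Fraction of clauses with evidence in place."""
    return len(completed & CLAUSES.keys()) / len(CLAUSES)

print(f"{readiness({4, 5, 6}):.0%}")  # 43% with three clauses evidenced
```

Even a rough tracker like this is useful in the run-up to certification, because it makes gaps in the management system visible clause by clause.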

Future of AI
As AI systems begin to operate with greater independence, the question of accountability becomes sharper. Who is responsible when a self-learning system makes a decision on its own? We’re going to see growing pressure for frameworks that clearly define machine agency and human decision rights, ensuring we always know where the ultimate responsibility lies.

Next, there’s a shift toward Dynamic and Real-Time Governance. Traditional compliance models were built for static systems, but AI is constantly updating and adapting. The future will see continuous oversight, not just annual audits, with AI ethics boards, real-time monitoring dashboards, and live audit trails that keep decision-making transparent and traceable.

Then we have Model Transparency and Explainability. The demand for AI we can understand is only going to increase, especially in high-risk areas like healthcare, finance, and criminal justice. Tools like SHAP and LIME will become standard, and explainability won’t just be a best practice; it will likely become a legal requirement for many applications.
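Tools like SHAP and LIME attribute a model’s prediction to its input features. Their machinery is more sophisticated, but the core intuition can be shown with a crude permutation test: shuffle one feature and measure how much the model’s accuracy drops. A self-contained sketch with a toy rule-based “model” (all names and data here are invented for illustration):

```python
import random

def permutation_importance(predict, X, y, feature_idx, n_repeats=30, seed=0):
    """Average accuracy drop when one feature column is shuffled.

    A crude stand-in for the intuition behind explainability tools:
    features the model relies on hurt accuracy more when scrambled.
    """
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    drops = []
    for _ in range(n_repeats):
        column = [row[feature_idx] for row in X]
        rng.shuffle(column)
        shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                    for row, v in zip(X, column)]
        drops.append(baseline - accuracy(shuffled))
    return sum(drops) / n_repeats

# Toy "model": approve (1) when income (feature 0) exceeds 50; ignores feature 1.
predict = lambda row: 1 if row[0] > 50 else 0
X = [[30, 7], [80, 2], [55, 9], [20, 4], [90, 1], [45, 6]]
y = [predict(row) for row in X]  # labels match the rule exactly

print(permutation_importance(predict, X, y, 0))  # large drop: feature 0 matters
print(permutation_importance(predict, X, y, 1))  # 0.0: feature 1 is ignored
```

The same logic scales up: a feature whose scrambling barely moves accuracy is one the model effectively ignores, which is exactly the kind of evidence an explainability requirement would ask for.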
As discussed, on the regulatory side, we’re moving toward Global Convergence. Right now, we have a patchwork of laws, the EU AI Act, OECD principles, NIST frameworks, and others, but over time, there’s going to be more harmonisation. ISO/IEC 42001 is perfectly positioned to act as the bridge, giving organisations a single governance framework that works across borders. And finally, the human element, New Roles and Skills.
The future will see the rise of AI auditors, model validators, and ethics officers. Governance will become a core, cross-functional skillset, combining IT, legal, ethics, and operational expertise. This isn’t just a tech issue; it’s a team effort.

The future of AI governance will be faster, more transparent, more global, and more human than ever before. And frameworks like ISO/IEC 42001 will be at the heart of making that future both innovative and safe.

Final Reflection
AI is powerful, but it is still just a tool. Its impact, positive or negative, comes down to the decisions we make in developing, deploying, and managing it. Responsible governance through frameworks like ISO/IEC 42001 is essential. If you’re interested in taking the next step, NQA offers a free readiness checklist and consultations to help you prepare for ISO/IEC 42001.

Want to speak to an expert about the ISO 42001 standard? Get in touch with our team today.
Learn more about the world's first standard for AI. Visit our ISO 42001 certification page.
