Gemma McKenzie, Marketing Operations Lead
Published on 15 May 2026

How to build meaningful human oversight into AI systems

AI oversight is more than human sign-off. Build systems with real judgment, accountability, and learning. How does your organisation handle AI overrides today?
AI is attractive for obvious reasons: ever-efficient, ever-growing, and ever-productive. What business doesn't want this? That helps explain why adoption is moving so quickly. McKinsey, for example, has reported that around 88% of organisations worldwide are using or exploring AI in at least one business function.
As Stephen Almond, Vice President of Policy and Consulting at the Centre for Information Policy Leadership, puts it: "It's very easy to dive straight into risk, but we should remember why people want to use AI: it makes their lives more efficient, improves productivity, and enables scale."
That is an important place to begin. The question is not whether AI can create value. It can. The question is whether governance can keep up with how AI is actually being used within organisations.
Because in many organisations, human oversight exists more as a form of reassurance than as a form of control. The act of a colleague reviewing and approving carries a tacit accountability that we have all historically understood. However, if your colleague does not understand, or lacks the context for, an AI-supported output, how can they challenge or change it? Yes, there is a human review, but the risk remains unchanged. And if the reviewer no longer understands what they are approving, has their accountability quietly changed too?

Why human oversight in AI often fails in practice

The phrase 'human in the loop' is often used in AI governance. It suggests that people remain firmly in control of automated systems and, therefore, accountable.
However, the phrase can hide poor design and disguise real risk.
If a person is expected to review hundreds of AI-assisted decisions in an hour, they are unlikely to be exercising meaningful judgment. If they do not have access to the right context, cannot verify the data behind a recommendation, or are not empowered to escalate concerns, their role becomes procedural rather than practical.
That is the distinction organisations need to focus on.

Is a human there to make a decision, or simply to legitimise one?

In high-stakes settings, where decisions affect people, sensitive data, or important outcomes, this matters most. In those contexts, weak oversight creates regulatory exposure alongside operational and ethical risk, and it undermines the very accountability the review is meant to demonstrate.

Stop treating AI as exceptional

One of the most useful ways to cut through the hype is to stop treating AI as something entirely separate from the rest of your governance environment.
As Camilla Winlow, Global Head of Data Protection at Talan UK, puts it: "AI is the same as what you should be doing anyway, but more so."
In practice, AI often exaggerates existing weaknesses:
  • unclear ownership
  • weak data quality controls
  • inconsistent escalation routes
  • poor operational visibility
  • overstretched teams
  • fragmented accountability
AI will expose issues that already exist, faster and at a greater scale.
That is why effective AI governance rarely starts with the model alone. It starts with the systems, decisions, and responsibilities around it.

Human in the loop and human on the loop are not the same

When people talk about human oversight in AI, they often collapse very different governance models into one idea.

Human in the loop

A human-in-the-loop model requires a person to approve or take action on a decision before the system can proceed.
This sounds reasonable, but in practice it often fails to be meaningful. If the process is too fast, too repetitive, or too opaque, the human becomes part of the machine rather than a control on it.
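To make the structure concrete, here is a minimal sketch of what "in the loop" means in code terms; every name here, such as `request_review`, is illustrative rather than a real API. The defining feature is that the system blocks until a person explicitly decides.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    subject: str
    action: str
    model_confidence: float

def request_review(rec: Recommendation) -> bool:
    # Hypothetical blocking review step. In a real system this would
    # route to a review queue, surface the context the reviewer needs,
    # and wait for an explicit decision.
    answer = input(f"Approve '{rec.action}' for {rec.subject}? [y/n] ")
    return answer.strip().lower() == "y"

def execute(rec: Recommendation) -> None:
    # Human in the loop: nothing proceeds without explicit approval.
    if request_review(rec):
        print(f"Executing: {rec.action}")
    else:
        print("Held back: recording the rejection for later analysis.")
```

The design risk described above lives in `request_review`: if that step is rushed, repetitive, or starved of context, the gate still exists on paper but no longer exercises judgment.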

Human on the loop

A human-on-the-loop model, by contrast, allows the system to act autonomously while a person monitors outcomes and intervenes when needed. This model is increasingly relevant as organisations explore agentic AI, but it also introduces harder questions about traceability and ownership.
As William Malcolm, Executive Director for Regulatory Risk at the ICO, notes: "Agentic AI raises hard questions about where automated decisions actually happen, how we allocate accountability, and how we respect purpose limitation across the value chain."
If a system operates across multiple tools, teams, and suppliers, can you still identify where a decision was made, how it was made, and who is responsible for it?
If not, the oversight model may no longer match the system you are deploying.
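As a rough illustration of the difference (all names here are assumptions, not a prescribed design), a human-on-the-loop arrangement lets the system act first while keeping every decision traceable, so a monitor can still answer those questions after the fact:

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("oversight")

@dataclass
class Action:
    trace_id: str   # ties the decision back to one auditable record
    tool: str       # which system or supplier acted
    decision: str   # what was decided

def act_autonomously(action: Action) -> None:
    # Human on the loop: the system does not wait for approval,
    # but every decision is logged with enough context to trace it later.
    log.info("trace=%s tool=%s decision=%s",
             action.trace_id, action.tool, action.decision)

def flag_for_intervention(actions: list[Action], suspect_tools: set[str]) -> list[Action]:
    # The monitor's side: pick out outcomes worth stepping in on,
    # for example anything produced by a tool that has started behaving oddly.
    return [a for a in actions if a.tool in suspect_tools]
```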

Use human overrides to improve the system

One of the most overlooked opportunities in AI governance is the value of disagreement.
When a person overrides an AI-supported recommendation, that moment should not be treated as an exception to file away. It is a signal worth learning from.
Camilla Winlow captures this clearly: "Every overturned AI decision implies the model could be tuned better. Oversight isn't just allowing humans to override; it's using that signal to improve the system."
That is where real oversight becomes more than a compliance measure.
A strong governance process should capture:
  • when overrides happen
  • why they happen
  • whether patterns are emerging
  • what those patterns reveal about the model, the workflow, or the underlying data
If override data is ignored, the same problems are likely to surface again. But if organisations pay attention to when people step in and use that insight to adjust the model, update policies, rethink workflows, or better support reviewers, oversight matures into a mechanism for building more resilient, high-performing systems rather than just a reactive check.
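A minimal sketch of what capturing that signal could look like follows; the record fields are illustrative, not a prescribed schema. The point is that only a handful of fields are needed before patterns start to become visible.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

@dataclass
class OverrideRecord:
    when: datetime        # when the override happened
    reviewer: str         # who stepped in
    model_output: str     # what the system recommended
    human_decision: str   # what the reviewer decided instead
    reason: str           # why, in the reviewer's own words

def override_patterns(records: list[OverrideRecord]) -> Counter:
    # Are patterns emerging? Counting reasons is the simplest first pass;
    # clusters here point at the model, the workflow, or the underlying data.
    return Counter(r.reason for r in records)
```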

Don't assume the DPO can carry AI governance alone

Many organisations instinctively route AI governance through the Data Protection Officer. Privacy is important, and data protection law remains highly relevant.
But it is only one lens.
As Camilla Winlow warns: "Don't assume your data protection officer will be your AI governance person. Data protection is one lens; AI governance is much broader."
That broader picture can include:
  • legal compliance
  • technical assurance
  • operational resilience
  • security
  • model performance
  • ethical alignment
  • accountability across teams and suppliers
William Malcolm makes a related point: "These are not new challenges. Data protection law is designed to be technology-neutral."
That should reassure organisations that they do not need to start from zero. But it should also remind them that privacy expertise alone will not answer every governance question raised by AI.
Real oversight is multidisciplinary by design.

Move at pace, but with purpose

Organisations are under pressure to move quickly. Some are responding with caution. Others are accelerating experimentation. Most are trying to do both at once.
The reality is that AI use is already happening inside your business, whether formally approved or not.
As Stephen Almond puts it: "You can't just sit back and block this stuff and then switch it on later. Your employees are already using AI—possibly in ways you don't see."
That is why waiting for perfect governance is not a viable strategy. The better approach is to move at pace, but with purpose.
William Malcolm puts it well: "Organisations need to move at pace, but also with purpose—AI deployment shouldn't just be a rush to keep up with competitors."
In practice, that means designing oversight into the workflow from the start (a rough sketch of how these choices could be pinned down follows the list):
  • Define where human intervention is required
  • Make sure reviewers have the time and information they need
  • Create clear escalation paths
  • Assign accountability across the decision chain
  • Log, analyse, and learn from overrides
  • Revisit controls as the system evolves
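Here is one way those six choices might be written down as an explicit policy object. Every name and value in this sketch is illustrative; the exercise, not the specific fields, is the point.

```python
from dataclasses import dataclass, field

@dataclass
class OversightPolicy:
    # Define where human intervention is required
    requires_human_approval: set[str] = field(
        default_factory=lambda: {"credit_decision", "hiring_shortlist"}
    )
    # Make sure reviewers have the time and information they need
    max_reviews_per_hour: int = 20
    required_context: tuple[str, ...] = ("input_data", "model_version", "confidence")
    # Create clear escalation paths
    escalation_contact: str = "ai-governance@example.com"
    # Assign accountability across the decision chain
    decision_owner: str = "head_of_underwriting"
    # Log, analyse, and learn from overrides
    log_overrides: bool = True
    # Revisit controls as the system evolves
    review_cadence_days: int = 90

def needs_human(policy: OversightPolicy, decision_type: str) -> bool:
    # A system consults the policy before acting on a given decision type.
    return decision_type in policy.requires_human_approval
```

Writing the policy down like this makes gaps visible: if a field cannot be filled in, the oversight model is not yet real.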

The question to ask before you scale

Before you scale any AI-supported process, ask one simple question:
If a human disagrees with the output, what happens next?
If the answer is unclear, inconsistent, or dependent on informal workarounds, your oversight model needs more work.
Meaningful human oversight ensures a person can genuinely intervene and that their actions improve the system over time. Without that functional impact, the process is just paperwork.

Final thought

AI governance is often framed as a balance between innovation and control. But the best governance provides a clear structure for progress. The value of AI is real, but so is the risk of deploying it faster than your team's accountability model can support. To make human oversight work, you must design for more than just presence. Design for judgment, authority, traceability, and learning.
That is how you move beyond nominal oversight and build a system where accountability is both a policy and a competitive advantage.
How does your organisation currently capture, review, and act on the moments when a human overrides an AI-supported decision?

Continuing conversations

Experts from across The Adaptavist Group share insights on the trends shaping how we work today, and what’s coming next. This blog was inspired by a talk delivered at OxGen 2025. You can watch the talk here.
Want more perspectives, event highlights, and future-focused insights? Follow The Adaptavist Group on LinkedIn to stay up to date with our latest thinking and where we’ll be next.