Compliance

FCA AI Governance 2026: What Voice AI Deployments Must Do

FCA AI governance for voice AI: from 1 September 2026, UK voice deployments fall under SM&CR, Consumer Duty and the new Code of Conduct. Get audit-ready.

Timeline of the 2026 changes:

  • Feb 2026 — Regulatory Priorities reports replace portfolio letters
  • Mar 2026 — Retail banking and consumer investments priorities published
  • Apr 2026 — AI Update and off-channel communications guidance
  • 1 Sep 2026 — Code of Conduct extension goes live

The FCA spent the first quarter of 2026 quietly redrawing the perimeter for AI in financial services. Most firms read the Regulatory Priorities reports as governance refreshes. They are not. They are the supervisor's first formal articulation of what "good" looks like for AI-mediated customer interactions — and from 1 September 2026, the Code of Conduct extension brings AI-assisted communications squarely inside the scope of conduct enforcement.

For voice AI specifically, this is the inflection. A voice agent handling a regulated customer call is now a Senior Manager's accountability surface, a Consumer Duty outcome generator, and a Code of Conduct artefact — all at once. The compliance question is no longer "can we deploy?" It is "can we evidence that the deployment meets four overlapping regimes simultaneously?"

This guide sets out exactly what the FCA expects from voice AI deployments in 2026, where the gaps usually open up, and how to build a governance stack that survives a Section 166 review.

This guide is shipped by the team behind Dilr Voice — enterprise voice AI live in 40+ countries with FCA-aligned governance controls. Or see DATS, our 5-stage AI consulting system used by regulated UK firms.

Key takeaway

From 1 September 2026, an AI voice agent that mishandles a vulnerable customer call is no longer just a customer service failure. It is a Code of Conduct breach, a Consumer Duty outcome failure, and a SYSC governance gap — attributable to a named Senior Manager.

  • 78% of executives say they could not pass an AI governance audit in 90 days (Grant Thornton, 2026).
  • Only 1 in 5 firms has a mature governance model for autonomous AI agents.
  • The FCA expects firms to evidence outcomes, not activity — call volume is not a metric.

The shift matters because UK financial services has been the most aggressive early adopter of voice AI. Collections, KYC verification, appointment scheduling, claims intake and SDR-style outbound have all been automated at scale across the FTSE 350 since 2024. Most of those deployments were built for operational efficiency, not regulatory durability, and they will need retrofitting before September. For the underlying economics that make voice AI worth defending in front of a regulator, see our breakdown of AI voice cost per call — payback typically lands inside 90 days, which is why budget owners keep pushing through compliance friction rather than around it. It is worth understanding what that friction actually consists of.

  • 63% of AI breaches occur at firms with no AI governance policy.
  • Only 6% of enterprises qualify as AI-mature (McKinsey, November 2025).

What the FCA actually expects from voice AI in 2026

The FCA's 2026 position is principles-based, but it now has teeth from four overlapping regimes. Each maps to specific voice AI control requirements, and each has a different evidentiary burden. A deployment that passes one will not necessarily pass another. The April 2026 AI Update and the sector Regulatory Priorities make the layering explicit — there is no single "AI rule" to comply with.

The four overlapping regimes

SM&CR (Senior Managers and Certification Regime). A named Senior Manager must own AI system performance and compliance before deployment. For voice AI, this typically lands with the SMF in charge of customer outcomes — Head of Retail, COO, or Chief Customer Officer. The accountability cannot be delegated to a vendor and cannot sit only with technology functions.

Consumer Duty. The four outcomes (products and services, price and value, consumer understanding, consumer support) all bite on voice AI. Consumer understanding is the sharpest test: a voice agent must communicate clearly enough that a retail customer can act on the information and make decisions in their interests. Vulnerable customer detection is non-optional.

SYSC (the Senior Management Arrangements, Systems and Controls sourcebook). SYSC requires proportionate governance, risk management, and audit. For voice AI this means model risk management, third-party oversight (your voice provider, your LLM provider, your telephony provider), and continuous monitoring with documented thresholds.

Code of Conduct (from 1 September 2026). The extension brings AI-assisted communications inside conduct rules. Harassment, bullying, and discriminatory behaviour expressed through AI-mediated channels — including voice agents — are now in scope. This includes off-channel and informal interactions, which the FCA's April guidance flagged specifically.

Where deployments typically fail

The pattern is consistent across the firms we audit. Voice agents are configured for happy-path performance and lightly tested for compliance edge cases. Three failure modes recur:

  1. No defined "good outcome" before launch. Firms measure handle time, containment, and CSAT. None of these are Consumer Duty metrics. Without a documented outcome definition (query resolved, customer confirmed understanding, no complaint within 30 days), the firm cannot evidence that the agent delivers the four outcomes.
  2. Vulnerability detection is keyword-based or absent. Most voice agents flag distress through static keyword lists. The FCA expects sentiment-aware, context-sensitive detection with a clear escalation path to a human within seconds, not minutes.
  3. Off-channel logging is incomplete. The FCA's April 2026 communications guidance requires firms to capture AI-assisted interactions across cloud calling, messaging, and notetaking. Voice AI deployments often store transcripts but not the model prompts, version, or decision rationale — which is what supervisors will ask for.

For the consent architecture that sits underneath all of this, our guide on consent capture in AI voice calls covers GDPR and PECR mechanics that interact directly with the FCA's customer understanding outcome.

Building the governance stack that survives supervision

The architecture below is the minimum viable structure for a voice AI deployment in an FCA-regulated firm in 2026. It is not the maximum. It is what supervisors will reasonably expect to see if they ask — and given the BCLP analysis of 2026 priorities, they will ask.

Pre-deployment, in-flight, and audit controls

Every voice AI deployment in scope of FCA supervision should evidence three control layers. Pre-deployment establishes that the system is fit to launch. In-flight confirms it stays fit. Audit demonstrates accountability after the fact. The table below maps the controls to the regulatory regime they primarily satisfy and to what changes after September 2026.

| Control area | Pre-September 2026 | Post-September 2026 | Primary regime |
| --- | --- | --- | --- |
| Senior Manager accountability | Implicit, often technology-owned | Named SMF, documented in MRT register | SM&CR |
| Outcome definition | Operational KPIs (AHT, containment) | Consumer Duty outcome metrics with 30-day windows | Consumer Duty |
| Vulnerability detection | Keyword-based or absent | Sentiment + context, sub-10s human escalation | Consumer Duty + Code of Conduct |
| Model + prompt versioning | Often unlogged | Versioned, immutable, queryable per call | SYSC |
| Off-channel logging | Transcripts only | Transcripts + prompts + model version + decision rationale | SYSC + Code of Conduct |
| Third-party oversight | Vendor SOC 2 letter on file | Documented model risk on every model in the chain | SYSC |
| Conduct rule application | Out of scope | In scope for AI-mediated communications | Code of Conduct |

The post-September column is what your evidence pack needs to look like. If you cannot produce a sample call file with the prompt version, model version, sentiment trace, vulnerability flag history, decision rationale, and outcome attribution within 24 hours of a supervisor request, the deployment is not yet ready.
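The 24-hour bar is easiest to meet if completeness is checked when the call record is written, not when the supervisor asks. A minimal sketch, assuming a flat per-call record — the field names are our illustrative assumptions, not an FCA schema:

```python
# Completeness check for a per-call evidence record. Field names are
# illustrative assumptions, not an FCA-mandated schema; a production store
# would also keep each record immutable and queryable per call.
REQUIRED_FIELDS = (
    "transcript", "prompt_version", "model_version",
    "sentiment_trace", "vulnerability_flags",
    "decision_rationale", "outcome_attribution",
)


def missing_evidence(record: dict) -> list:
    """Return the required fields that are absent or empty in one record."""
    return sorted(f for f in REQUIRED_FIELDS if record.get(f) in (None, ""))


call = {
    "transcript": "redacted sample transcript",
    "prompt_version": "collections-v3.2",   # hypothetical version tag
    "model_version": "asr-2026-02",         # hypothetical version tag
    "sentiment_trace": [0.4, -0.1, -0.7],
    "vulnerability_flags": [{"t": 41.2, "reason": "distress tone"}],
    "decision_rationale": "payment plan offered under policy P-12",
    # outcome_attribution deliberately missing: not audit-ready
}
print(missing_evidence(call))  # ['outcome_attribution']
```

Gate writes on an empty `missing_evidence` result and the 24-hour evidence request becomes a query, not a reconstruction exercise.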

The contrarian point most firms miss

The widespread assumption is that the FCA will treat voice AI more harshly than human agents because it is novel. The actual signal in the 2026 priorities is the opposite: the FCA expects voice AI to deliver better Consumer Duty outcomes than human agents, not equivalent ones. The regulator's view — implicit in the AI Update and reinforced across the 2026 sector priorities — is that AI removes excuses. A human agent might miss a vulnerability cue on a Friday afternoon; an AI agent that misses one is a systems failure, not a human one. Firms should set internal benchmarks above the human baseline, not at parity, and evidence the delta.

The same logic applies to the broader EU regime — see our guide on EU AI Act voice AI obligations for how Article 50 disclosure requirements interact with FCA Consumer Duty consumer understanding.

Want the full compliance picture for a UK enterprise voice AI deployment? Read our EU data residency voice AI guide, our breakdown of voice biometric data security obligations, or the enterprise voice AI vendor evaluation criteria we use with FCA-regulated buyers.

Talk to the operators

Get FCA-ready before 1 September.

30-min scoping call · No deck · Confidential. We'll map your voice AI deployment to SM&CR, Consumer Duty, SYSC and the new Code of Conduct — and tell you what your evidence pack is missing.

Written by the Dilr.ai engineering team — practitioners who ship enterprise AI in production for FCA-regulated firms. Follow us on LinkedIn for shipping notes, or subscribe via the RSS feed.

