ICO AI Code of Practice: Voice AI obligations from May 2026

ICO AI Code of Practice (SI 2026/425) takes effect 12 May 2026. What UK enterprise voice AI must now evidence: disclosure, explainability, bias, redress.

Timeline (source: legislation.gov.uk, UKSI 2026/425): SI 2026/425 made by the Secretary of State on 16 April 2026; laid before Parliament on 21 April; in force and binding on the ICO from 12 May; ADM consultation closes 29 May; Code drafted and put to statutory consultation across 2026–27.

On 12 May 2026, the Data Protection Act 2018 (Code of Practice on Artificial Intelligence and Automated Decision-Making) Regulations 2026 — SI 2026/425 — come into force. Most enterprise AI vendors are talking about the EU AI Act. They are watching the wrong jurisdiction. For UK enterprises running AI voice agents that process personal data — and almost every voice deployment does — the more immediate question is what the Information Commissioner's Office is now statutorily compelled to publish.

This is not consultation noise. It is a final SI, signed by the Secretary of State on 16 April 2026, laid before Parliament on 21 April, and now through every parliamentary stage. It binds the ICO to produce a new statutory Code of Practice on AI and automated decision-making under section 124A of the Data Protection Act 2018. Voice AI deployments — inbound triage, outbound collections, scheduling, KYC verification — sit squarely in scope because almost every call processes a name, a phone number, a voice pattern, or a decision that affects a data subject.

The commercial question is not "are we caught?" The commercial question is whether your voice AI estate can demonstrate the transparency, explainability, bias control, and rights-and-redress evidence the Code will require — when the regulator asks, and when an enterprise procurement team asks first.

This guide is shipped by the team behind Dilr Voice — enterprise voice AI live in 40+ countries, with Code-ready governance built in. For implementation help, see the DATS five-stage AI methodology.

Key takeaway

SI 2026/425 does not itself impose new operational rules on enterprises — it forces the ICO to publish a Code that does. UK voice AI deployments should not wait for the Code to land. The transparency, bias, and audit evidence the Code will demand is the same evidence enterprise procurement, the FCA, and the EU AI Act already demand. Build it once and it satisfies all of them.

The Regulations themselves are short. The headline numbers UK voice AI operators need to hold in mind:

12 May 2026 · SI 2026/425 in force
88% · of enterprises now using AI (McKinsey 2025)
6% · capture material EBIT impact
£17.5M · UK GDPR maximum fine cap
The arithmetic for the board is straightforward: an audit failure that produces remedial costs, contract loss, and a regulatory finding is materially more expensive than building the evidence layer up-front. ~88% of enterprises now use AI (McKinsey State of AI 2025) but only ~6% capture material EBIT — almost always the ones who built governance into the deployment, not bolted it on after. The Code is being written in the gap between those two numbers.

What the Code will actually require of voice AI

The text of the Code has not been published. The ICO has now been given a hard statutory mandate to produce one, and its public March 2026 strategy update has already signalled the four pillars it will be built on: transparency and explainability, bias and discrimination, rights and redress, and processing children's personal data. The ADM consultation that closed 29 May 2026 telegraphed how the ICO will operationalise these for automated decisions — and a voice AI agent making, qualifying, or routing a decision is automated decision-making in the s.50C / Article 22C sense whenever the outcome materially affects the caller.

This is where voice AI is genuinely distinct from text-only systems. Three vectors the Code will scrutinise that voice operators routinely under-evidence:

1. Disclosure at call open

Voice is the only channel where the disclosure happens in real time, in audio, with no UI affordance for "I don't accept". The Code will expect a clear, audible, logged statement that the caller is speaking to an AI, the purpose, and the rights available — and an evidence trail per call ID showing it was delivered. This sits alongside the parallel obligation under EU AI Act Article 50 for any UK enterprise touching EU data subjects.
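
As a concrete sketch, one way to capture that evidence trail is a single append-only record per call, written the moment the disclosure audio finishes playing. The event shape below is an illustrative assumption, not a schema prescribed by the Code or by any particular voice platform.

```python
# A minimal sketch of a per-call disclosure record, assuming a hypothetical voice
# stack that emits one event when the AI disclosure finishes playing. Field names
# are illustrative, not an ICO-prescribed schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DisclosureEvent:
    call_id: str                    # ties the record to the telephony call ID
    timestamp_utc: str              # when the disclosure finished playing
    disclosure_version: str         # which approved script was played
    audio_offset_ms: int            # waveform marker: where the statement sits in the recording
    purpose: str                    # stated purpose of the call
    rights_statement_played: bool   # caller told how to object or reach a human

def log_disclosure(event: DisclosureEvent, sink) -> None:
    """Append one immutable JSON line per call; the file becomes the per-call evidence trail."""
    sink.write(json.dumps(asdict(event)) + "\n")

with open("disclosure_log.jsonl", "a") as sink:
    log_disclosure(
        DisclosureEvent(
            call_id="c-000123",
            timestamp_utc=datetime.now(timezone.utc).isoformat(),
            disclosure_version="uk-ico-disclosure-v3",
            audio_offset_ms=1800,
            purpose="appointment scheduling",
            rights_statement_played=True,
        ),
        sink,
    )
```

The waveform offset matters: it lets an auditor jump straight to the disclosure in the call recording rather than trusting a boolean flag.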

2. Explainability of automated outcomes

If the agent rejects an application, escalates a vulnerable customer, refuses a refund, or completes a KYC step, the Code will require the operator to explain — in plain English, on demand — what the system did and why. For voice that means tying the transcript, the model decision log, the tool calls, and the routing path together as one auditable record. Most voice stacks store these in three different systems.
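
A hedged sketch of what "one auditable record" could look like in practice: join the per-call slices of each store on the call ID and export them as a single object. The store shapes and field names below are assumptions for illustration, not a reference to any specific stack.

```python
# A minimal sketch of stitching separate stores into one auditable record per call.
# The transcripts/decisions/tools/routing inputs are hypothetical dict-of-lists stores
# keyed by call_id; swap in your own storage layer.
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    name: str
    arguments: dict
    result_summary: str

@dataclass
class DecisionRecord:
    call_id: str
    outcome: str                        # e.g. "refund_refused", "escalated_vulnerable"
    plain_english_reason: str           # what is read back on an explainability request
    transcript_turns: list[str] = field(default_factory=list)
    model_decisions: list[dict] = field(default_factory=list)   # model/prompt version and score per step
    tool_calls: list[ToolCall] = field(default_factory=list)
    routing_path: list[str] = field(default_factory=list)       # queues and skills the call traversed

def build_decision_record(call_id, transcripts, decisions, tools, routing) -> DecisionRecord:
    """Join each store's slice for this call_id into one exportable, auditable record."""
    final = decisions[call_id][-1]
    return DecisionRecord(
        call_id=call_id,
        outcome=final["outcome"],
        plain_english_reason=final["reason"],
        transcript_turns=transcripts[call_id],
        model_decisions=decisions[call_id],
        tool_calls=[ToolCall(**t) for t in tools[call_id]],
        routing_path=routing[call_id],
    )
```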

3. Bias monitoring across accent, dialect, and demographic

ASR error rates vary by accent. Sentiment models trained on US data systematically misclassify British regional patterns. A 4-point gap in handle-rate between two postcodes is a discrimination signal the Code will treat as evidence of unfair processing — not a model quirk. Bias monitoring on a quarterly cadence, broken down by protected characteristic where lawfully held, becomes a controller obligation.
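
A minimal sketch of that quarterly cut, assuming per-call rows that already carry an accent or region label and a handled-without-escalation flag. The 4-point threshold echoes the example above; it is an internal review trigger, not an ICO-defined limit.

```python
# A minimal sketch of a quarterly bias cut over per-call rows, assuming each row
# already carries an accent/region label and a handled-without-escalation flag.
from collections import defaultdict

def handle_rate_by_group(calls: list[dict], group_key: str = "accent_group") -> dict[str, float]:
    """Handle rate per group: share of calls resolved without human escalation."""
    totals, handled = defaultdict(int), defaultdict(int)
    for call in calls:
        group = call[group_key]
        totals[group] += 1
        handled[group] += int(call["handled_without_escalation"])
    return {group: handled[group] / totals[group] for group in totals}

def flag_gaps(rates: dict[str, float], threshold: float = 0.04) -> list[tuple[str, str, float]]:
    """Flag any pair of groups whose handle-rate gap meets the review threshold (4 points here)."""
    groups = sorted(rates)
    return [
        (a, b, round(abs(rates[a] - rates[b]), 3))
        for i, a in enumerate(groups)
        for b in groups[i + 1:]
        if abs(rates[a] - rates[b]) >= threshold
    ]
```

The same gap check can be reused over word-error-rate or containment splits by swapping the metric column.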

How to map the Code to a voice AI deployment

The Code has not yet been drafted, but the controls it will require are already visible in the ADM guidance, the ICO's 2026 strategy, the existing UK GDPR regime, and the Article 22 case law the ICO is consulting on. UK enterprises that align now will not need to retrofit later. Map every voice deployment against four control families — and the same evidence file satisfies the Code, the EU AI Act, FCA AI governance, and existing UK GDPR Article 22.

Code requirement | Voice AI specific obligation | Evidence required | Owner
Transparency at decision point | Audible AI disclosure at call open and on transfer | Per-call disclosure log tied to call ID, plus waveform marker | Voice operations
Explainability of outcomes | Plain-English reason chain per decision the agent makes | Linked transcript, model decision log, and tool-call audit | Data + product
Bias and discrimination control | ASR / sentiment performance across accent and demographic cuts | Quarterly bias report, protected-characteristic split where lawful | Compliance + DS
Rights and redress | Subject access request resolution within 30 days, including audio | SAR pipeline that exports audio, transcript, decision, retention | DPO
Children's personal data | Age-aware routing; no profiling of minors without lawful basis | Age-gate at intake, plus DPIA addendum on minor-facing flows | DPO + product
Lawful basis and consent | PECR and UK GDPR consent capture for outbound and recording | Consent capture architecture for AI voice | Legal + ops
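
To make the rights-and-redress row above concrete, here is a hedged sketch of a SAR export that bundles audio, transcript, decision record, and retention metadata into one package per data subject. The fetch_* callables are hypothetical stand-ins for whatever storage layer actually holds each artefact.

```python
# A minimal sketch of a SAR export bundle covering audio, transcript, decision, and
# retention metadata. The fetch_* arguments are hypothetical callables over your storage.
import json
import zipfile
from pathlib import Path

def export_sar_bundle(subject_id: str, call_ids: list[str], out_dir: Path,
                      fetch_audio, fetch_transcript, fetch_decision, fetch_retention) -> Path:
    """Write one zip per subject access request, ready to resolve within the 30-day window."""
    out_dir.mkdir(parents=True, exist_ok=True)
    bundle = out_dir / f"sar_{subject_id}.zip"
    with zipfile.ZipFile(bundle, "w") as zf:
        for call_id in call_ids:
            zf.writestr(f"{call_id}/audio.wav", fetch_audio(call_id))
            zf.writestr(f"{call_id}/transcript.txt", fetch_transcript(call_id))
            zf.writestr(f"{call_id}/decision.json", json.dumps(fetch_decision(call_id), indent=2))
            zf.writestr(f"{call_id}/retention.json", json.dumps(fetch_retention(call_id), indent=2))
    return bundle
```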

What enterprise voice AI buyers should do before the Code lands

Three things separate the operators who will treat the Code as a 6-week scramble from those who will treat it as a procurement advantage:

Build the evidence layer once. The Code, the EU AI Act, FCA AI governance for regulated firms, and existing UK GDPR Article 22 obligations all require versions of the same artefacts: a per-call disclosure log, an explainable decision trail, a quarterly bias report, and a SAR pipeline that handles audio. A vendor that produces these natively is materially cheaper to operate than one that requires you to build them in your data warehouse. This is the difference an enterprise voice AI evaluation should surface in the first vendor call.

Treat children's data as a first-class flag, not a footnote. SI 2026/425 specifically requires the Code to address children's personal data. Any inbound flow that might surface a minor — education admissions, family-facing healthcare, claims intake — needs an age-aware routing rule and a DPIA addendum before the Code lands, not after.
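
A minimal sketch of such a routing rule, assuming a hypothetical intake step that captures or infers an age band before any profiling runs. The flow names and age bands are illustrative, not a prescribed taxonomy.

```python
# A minimal sketch of an age-aware routing rule. The age_band values and flow names
# are illustrative assumptions, not a prescribed taxonomy.
from typing import Optional

MINOR_FACING_FLOWS = {"education_admissions", "family_healthcare", "claims_intake"}

def route_intake(age_band: Optional[str], flow: str) -> str:
    """Route minors, and unknown ages on minor-facing flows, away from automated profiling."""
    if age_band == "under_18":
        return "human_agent"                   # no profiling of minors without a lawful basis
    if age_band is None and flow in MINOR_FACING_FLOWS:
        return "age_gate_then_human_review"    # unknown age on a minor-facing flow: do not profile
    return "ai_agent"
```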

Pre-build the audit pack. Procurement teams in regulated UK enterprises will start asking for "ICO Code readiness" the week the Code is published. The vendors who can answer the question on the first call win the deal. Vendors who need 30 days to prepare the answer lose the slot.

Want to go deeper? Try Dilr Voice live, book an AI placement diagnostic for compliance-led deployments, or read about our deployment methodology for regulated UK enterprises.

The Code itself will not be published the day SI 2026/425 comes into force. The ICO must consult the Secretary of State, the Equality and Human Rights Commission, and other prescribed bodies under section 124B before the Code is laid. Realistically, draft text will appear in late 2026 or 2027. But the controls it will codify are visible now in the ICO's March 2026 AI and biometrics strategy update, the ADM consultation, and the parallel obligations under existing law. The cost of acting now is small. The cost of waiting until the Code lands is a 90-day remediation programme run under regulatory pressure — and a procurement gap that lets compliance-ready competitors take the deal first.

UK enterprise voice AI also already sits inside a broader regulatory mesh — the enterprise AI voice agents guide covers how this layers with FCA, EU AI Act, and PECR obligations across a typical voice estate.


Be ICO-ready before the Code lands.

30-min scoping call · No deck · Confidential. We map your voice estate against the four Code pillars and tell you exactly where the gaps are.

Written by the Dilr.ai engineering team — practitioners who ship enterprise AI in production.


