§ Service 02 / Operating model

Owned on day one.

The operating model is the organisational chassis your AI sits inside. RACI. Review cadence. Evaluation lifecycle. Escalation paths. Model-risk management. Without it, every placement is an orphan. We design it in six to ten weeks, calibrated to your regulatory posture.

§ 01 / Why an operating model

Models are easy. Ownership is hard.

Shipping an AI model is a week of work. Governing it for three years is the actual product. Yet almost no enterprise AI programme starts with governance — it gets bolted on after audit, after incident, after a board question nobody can answer.

The Operating Model engagement is where we design the chassis your AI sits inside: who owns each placement, who approves changes, how evaluation runs, how drift gets detected, how incidents escalate, and how the whole thing interfaces with your existing model-risk, compliance, and data-governance functions.

Ungoverned AI is not a capability. It's a liability with a roadmap.

This is the engagement that turns three pilots into one owned capability. It's also the engagement that lets you go to your board and say: yes, we know what's in production; yes, we know who owns it; yes, we know how we'd turn it off.

Where this sits in DATS

Operating Model is Stage 03 of the Dilr AI Transformation System. It usually runs after a Placement Diagnostic (so we know what we're governing) and before an Execution Office (so the placements land into a chassis that already exists). Clients who skip it almost always come back for it.

§ 03 / What we design

Six modules. Each shippable on its own.

A full operating model is six designs, delivered as working artefacts your team can operate from. Clients often start with the three most pressing modules and graduate to the full set.

Module 01

Governance charter

The document that sits above everything: scope, principles, authority, escalation paths, and the line between AI governance and your existing frameworks.

Module 02

RACI matrix

For every placement: who is Responsible, Accountable, Consulted, Informed. Product owner. Model owner. Data owner. Risk owner. Named, not placeholder.

Module 03

Lifecycle + eval

How placements move from idea to sunset. Stage gates, approval criteria, eval framework, drift monitoring, and the turn-off protocol.

Module 04

Review cadence

The boards and rituals. Who meets, how often, what they see, what they can approve. Designed to cost your senior team two hours a month, not two days.

Module 05

Org design

Where the AI function sits: centralised, federated, or hub-and-spoke. Team composition, reporting lines, and the first three hires in priority order.

Module 06

Policy pack

Acceptable use, data handling, vendor management, third-party model risk, incident response, and customer-facing disclosure. Ready for audit.

§ 04 / Sample RACI

The RACI we most often ship.

Every client gets a tailored matrix. This is the shape of it: rows are decisions, columns are roles. We've removed proprietary columns and simplified for illustration.

Decision                  Product owner   Model owner   Risk owner   Exec sponsor   Audit
Approve new placement     A               C             C            R              I
Pass go-live eval         C               R             A            I              I
Retrain or update model   C               R             C            I              I
Declare drift incident    C               R             A            I              I
Turn placement off        C               C             R            A              I
Annual audit sign-off     I               C             C            A              R

R = Responsible · A = Accountable · C = Consulted · I = Informed
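The sample matrix above is simple enough to encode as a lookup table, which is how it tends to end up inside tooling. This is a sketch of that encoding with a helper for querying it; in a real engagement the role labels become named individuals.

```python
# The sample RACI above, encoded for lookup. Roles mirror the
# illustrative matrix; a real engagement uses named individuals.
ROLES = ["product owner", "model owner", "risk owner", "exec sponsor", "audit"]

RACI = {
    "approve new placement":   "ACCRI",
    "pass go-live eval":       "CRAII",
    "retrain or update model": "CRCII",
    "declare drift incident":  "CRAII",
    "turn placement off":      "CCRAI",
    "annual audit sign-off":   "ICCAR",
}

def roles_for(decision: str, letter: str) -> list[str]:
    """Roles holding a given RACI letter (R/A/C/I) for a decision."""
    return [role for role, code in zip(ROLES, RACI[decision]) if code == letter]

print(roles_for("turn placement off", "A"))  # ['exec sponsor']
```

A matrix in this form can be validated mechanically, e.g. checking that escalation-critical decisions each carry an Accountable role.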

§ 05 / Regulatory

Built for the regime you're actually in.

No two operating models are identical, because no two regulatory perimeters are identical. We calibrate every engagement to the frameworks that actually apply to you — not a generic checklist.

EU: EU AI Act. Risk tiers, high-risk obligations, transparency.
UK: PRA SS1/23 · FCA. Model risk management, SYSC, consumer duty.
US: NIST AI RMF. Govern, map, measure, manage. Sector overlays.
Sector: ISO 42001 · SOC 2. AI management systems, control attestations.
§ 06 / FAQ

Questions, answered.

We already have a model risk framework. Isn't this duplication?
No — we extend what you have. Most existing MRM frameworks are built for statistical and credit models. Generative AI breaks those assumptions. We bolt a gen-AI-native layer onto your existing framework rather than replacing it.

Can you run this without a diagnostic first?
Yes, if you already have a placement inventory you trust. Most clients don't — they have a list of pilots, not a map of placements. A short diagnostic often pays for itself here.

What's the smallest team this will work for?
A three-person AI function is the floor. Below that, the overhead of governance outweighs the benefit. We'll tell you honestly in the scoping call if it's too early.

Will this satisfy our auditors?
The deliverables are designed to be audit-ready out of the box, and we've had them accepted by Big 4 auditors and in-house audit teams at regulated firms. We don't pretend that's a guarantee for your specific audit — but it's the shape your auditor expects.

What if our AI spans multiple subsidiaries or jurisdictions?
Common case — we design a group-level charter with subsidiary-level adaptations. Typically adds two weeks to the engagement.

Ready to put an owner on your AI?