Where AI belongs
— and where it doesn't.
A four- to six-week diagnostic that maps your value chain, inspects your stack, scores every candidate placement, and hands back a ranked, sequenced roadmap. Built for organisations that already know AI is a priority — and need clarity on what to do Monday.
Most AI work fails in placement.
The most expensive mistake in enterprise AI isn't picking the wrong model. It's picking the wrong placement — the wrong part of the business, the wrong moment in a workflow, the wrong integration point. A diagnostic is cheaper, faster, and more honest than another pilot.
We've sat in dozens of AI steering committees where the question wasn't “can we do this?” — it was “where should it go, and who owns it?” Teams burn quarters answering the first question with proofs of concept. They burn years avoiding the second.
The cost of a bad placement is not a failed pilot. It's the eighteen months of organisational trust that quietly evaporate around it.
The Placement Diagnostic is our answer to that. Four to six weeks, fixed scope, ending with a roadmap your CTO and your COO can both sign.
What the diagnostic is not
It isn't a maturity assessment. It isn't a red-amber-green score against a pre-printed framework. It isn't a tool-selection exercise. And it isn't a list of forty use cases ranked by speculative ROI — which is what every enterprise has already done, twice.
What happens across six weeks.
Weeks 5 and 6 run only if the engagement is scoped at six weeks; four-week engagements compress that work into a single sprint. Either way, you get a roadmap in your hand on the final day.
Four artefacts. All working documents.
Not one of these is a PowerPoint. Everything we produce is designed for a working team to build against, debate, revise, and own. You also keep the interview corpus and scoring workbook — raw materials, not opinions.
Placement map
A visual model of your value chain with every candidate AI placement mapped onto it — showing where the integration point is, which system owns it, and what data feeds it.
Ranked roadmap
Three placements sequenced for the next 12 months, with dependencies, team requirements, go-live criteria, and the named owner on your side for each.
Feasibility scorecard
Every placement we considered, scored transparently on feasibility, data readiness, org readiness, and value. The workbook is yours — reuse it next year.
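The scoring mechanics behind a workbook like this can be sketched in a few lines. This is an illustrative example only — the dimension names match the four above, but the weights, rating scale, and candidate placements are hypothetical, not the actual workbook:

```python
# Illustrative sketch of a transparent placement scorecard.
# Weights, the 1-5 scale, and the candidates are hypothetical.

DIMENSIONS = {
    "feasibility": 0.25,
    "data_readiness": 0.25,
    "org_readiness": 0.25,
    "value": 0.25,
}

def score(placement: dict) -> float:
    """Weighted sum of a placement's 1-5 ratings across all dimensions."""
    return sum(placement[d] * w for d, w in DIMENSIONS.items())

def rank(placements: dict) -> list:
    """Return placement names ordered best-first by weighted score."""
    return sorted(placements, key=lambda name: score(placements[name]), reverse=True)

candidates = {
    "claims_triage":  {"feasibility": 4, "data_readiness": 3, "org_readiness": 4, "value": 5},
    "email_drafting": {"feasibility": 5, "data_readiness": 4, "org_readiness": 2, "value": 2},
    "fraud_scoring":  {"feasibility": 2, "data_readiness": 2, "org_readiness": 3, "value": 5},
}

print(rank(candidates))  # best placement first
```

The point of the sketch is the transparency: every ranking decision reduces to ratings and weights you can see, argue with, and re-run next year with new numbers.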
The don't-do list
The placements we recommend you kill or defer, with reasoning. This is often the most valuable artefact — and the one your CFO will read first.
A retail bank had forty AI ideas. We gave them three — and the list of thirty-seven reasons why.
When we started, three separate teams were piloting generative AI against the same customer-service workflow. None of them had integrations with the system of record. None of them had an owner. The head of risk hadn't been in the room. Six weeks later: one placement, live in one team, with an eval harness, an owner, and a quarterly review board — plus a defensible case for retiring the other two.
Composite · details anonymised · representative of engagements
Six stakeholders. Two hours a week.
This is a light-touch engagement by design. We'd rather interview six people thoroughly than sixty people superficially.
- An executive sponsor — someone who can say “yes” to a placement and “no” to its alternatives.
- A head of operations — who knows where the real friction is in the business.
- A head of engineering or platform — who can say what can be integrated and what can't.
- A head of risk, compliance, or legal — the regulatory scan is not optional.
- A finance partner — to validate the value side of the scorecard.
- An end-user champion — someone who works in the workflow that will be affected.
Beyond that, expect one working session per week with the combined group, plus a final readout. We handle the rest.