The Big-Five consulting model — 20 people, 6 months, £1.5m+ — was built when engineering output scaled linearly with headcount. That math doesn't survive AI. The pod is the operating model that replaces it.
Disambiguation
“AI pod” has become a vendor category — and most of what shows up under that name isn't this. Three patterns we explicitly are not:
Not this
Offshore platforms sell “AI pods as a service” — a monthly headcount subscription you renew indefinitely. We are an outcomes-engineered engagement that ends in 6 months with handoff.
Not this
“AI” bolted onto the same pyramidal team that produced 2010s offshore delivery. We are senior-only, no juniors — AI is what replaces the pyramid, not the cover for it.
Not this
Three engineers each using Copilot is a 30% productivity lift on individual coding. An AI-augmented pod is a different operating model. The leverage is 5–10× per senior, not 30% per task.
The math
Not theoretical. This is the engagement shape we have run for the past two years across QA modernization, AI integration, platform reliability, and delivery acceleration.
| Dimension | 20-person traditional | 3-person AI-augmented pod |
|---|---|---|
| Team shape | 2 architects, 6 mid-level, 8 junior, 2 PMs, 2 BAs | 3 senior engineers. No juniors. No PMs. |
| Monthly cost | £150,000–£280,000 | £20,000–£40,000 |
| 6-month total | £900,000–£1,700,000 | £120,000–£240,000 |
| Time to first delivery | 4–8 weeks (kickoff + ramp) | 1–2 weeks. No ramp. |
| Knowledge transfer | Discrete deliverable at the end | Continuous, embedded throughout |
| Surface area shipped | Comparable | Comparable |
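As a sanity check, the 6-month column is just the monthly range times six. A few lines of Python, using only the figures from the table above, confirm the totals and the headline cost ratio:

```python
# Sanity-check the table's 6-month totals and the cost ratio between models.
trad_monthly = (150_000, 280_000)   # traditional 20-person team, GBP/month
pod_monthly = (20_000, 40_000)      # 3-person AI-augmented pod, GBP/month

trad_total = tuple(m * 6 for m in trad_monthly)   # 6-month traditional total
pod_total = tuple(m * 6 for m in pod_monthly)     # 6-month pod total

# Compare like endpoints: low-to-low and high-to-high.
ratios = (trad_total[0] / pod_total[0], trad_total[1] / pod_total[1])
```

Comparing like endpoints, the pod runs at roughly one-seventh of the traditional cost at either end of the range; the table's £1.7m figure is £1.68m rounded.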
How the pod operates
“Three senior engineers, each with a coding assistant” is the shallow description, and it is wrong. The leverage comes from the operating model around the senior engineers, not from each engineer being 30% faster.
- **Context loading.** Retrieval-augmented generation over your repos, runbooks, and ADRs. The senior engineer never starts from a blank context; they start with the system's history loaded.
- **Test scaffolding.** Generating the boring 70% of a test suite (table-driven cases, edge enumeration, contract drift detection) so the senior writes only the interesting 30%.
- **Dual review.** Every pull request is reviewed by both a senior pod member and an LLM tuned to the team's design patterns. Drift gets caught before merge, not in retrospective.
- **Continuous documentation.** Runbooks, ADRs, and handoff docs are written continuously, not as a deliverable at the end. The pod's institutional knowledge accumulates as the code does.
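The first of these practices can be sketched in miniature. This is an illustrative toy, not the pod's actual tooling: a bag-of-words cosine similarity stands in for a real embedding model, and the indexed ADR and runbook chunks are hypothetical.

```python
# Toy sketch of retrieval-augmented context loading over a repo's docs.
# A real setup would use an embedding model and a vector store; here a
# bag-of-words cosine similarity stands in for both.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Hypothetical runbook/ADR chunks indexed at engagement start.
chunks = [
    "ADR-012: payments service uses the outbox pattern for event publishing",
    "Runbook: rotating the staging database credentials",
    "ADR-007: all external calls go through the retry-with-backoff client",
]

# Retrieved chunks are prepended to the LLM prompt, so the engineer
# starts with the system's history loaded rather than a blank context.
context = retrieve("why does the payments service publish events this way", chunks)
```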
The leverage is not 30% per engineer. It is 5–10× per senior. When three seniors operate this way, a 90-day engagement produces roughly the deliverable surface area of a 6-month, 12-person traditional engagement.
The first 90 days
**Weeks 1–2: install.** Codebase access, internal LLM tooling deployed, system mapped. By the end of week 2, the pod knows your system at the level a Big-Five engagement reaches at week 8.

**Weeks 3–6: the heavy lift.** The defined goal ships: modernization, integration, platform stand-up, or cost reduction. Continuous delivery, daily merges, weekly demos. Your team is in every PR review.

**Weeks 7–10: hand-off.** The pod's role shifts from doing to enabling. Your team takes ownership of the new patterns; pod members move to advisory.

**Weeks 11–12: exit.** Runbooks, ADRs, and AI tooling configuration are handed off. The pod leaves. Your team operates independently.
Honest about fit

Pods don't work for pure staff augmentation (many hands on well-defined tasks), for greenfield exploratory R&D (where the unknown is what to build, not how to build it), or for highly siloed organisations where every merge requires committee approval. The pod model relies on direct engineering relationships and short feedback loops to deliver its leverage.

Common questions

**What is an AI-augmented pod?**
A small team of senior engineers (typically three) operating with internal LLM tooling: RAG over client codebases, test scaffolding agents, architectural review agents, and continuous documentation. The combination delivers output comparable to a 15–20 person traditional consulting engagement at a fraction of the cost.

**What does it cost?**
£20,000–£40,000 per month, depending on the engagement shape. A typical 6-month engagement totals £120k–£240k. A traditional 20-person consulting equivalent runs £150k–£280k per month, totalling £900k–£1.7m over six months: roughly 7× the cost for comparable output.

**How is this different from giving our engineers Copilot?**
Copilot is incremental productivity for individual engineers (roughly a 15–30% lift on coding tasks). An AI-augmented pod is a different operating model: LLM tooling tuned to the client's codebase, test scaffolding agents, architectural review agents, and continuous documentation built into the engagement. Leverage per senior engineer is 5–10×, not 30%.

**What happens in the first 90 days?**
Weeks 1–2: install (codebase access, AI tooling deployment, system mapping). Weeks 3–6: the heavy lift (modernization, integration, or platform stand-up ships with continuous delivery and weekly demos). Weeks 7–10: hand-off (your team takes ownership). Weeks 11–12: exit (runbooks, ADRs, and AI tooling configuration delivered).
Engage a pod
We scope every engagement in a single conversation. No procurement deck, no partner glad-handing. Three senior engineers, AI-augmented delivery, fixed scope, measurable outcome.