PLAYBOOK

When NOT to build with AI

Three real client conversations where we said "don't build this with AI" — and one where we said "build it without AI first." The pattern that connects them.

READ · 6 MIN · UPDATED · 2026-04-01 · BY · PINTOED AI STUDIO

Engagement #1: The "AI customer service" spec

Mid-market e-commerce client, ~$40M/yr revenue, wanted "an AI agent that handles all customer service." We dug in. Their actual ticket volume was 800/month, and 60% of it was the same three questions: "where's my order?", "how do I return this?", "what's your promo?". The rest were edge cases that needed a human.

The right answer wasn't an AI agent. The right answer was a better-designed help centre with a search bar that worked, three clear answers to those three questions, and a Shopify-native "where's my order" widget. We pointed them at a $3K Shopify-app bundle that solved 80% of their volume in a week. They kept the remaining 20% with a human who now had time to do it well.

Engagement #2: The "AI scoring" platform

B2B SaaS client wanted to ship "AI lead scoring" as a marketed feature on their platform. They had no data — they were a year-old startup with 200 customers, no labelled outcomes, no consistent schema across customers' use of the product.

The right answer wasn't AI. It was a rules-based scoring engine built in two weeks, deployed to all customers, and explicitly framed in the UI as "your custom scoring rules." Once they had a year of usage data — actual conversion outcomes labelled against those scores — we could revisit. They shipped the rules engine, and fifteen months later it still powers their scoring product.
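
To make "rules-based scoring engine" concrete, here's a minimal Python sketch of the shape we mean. The rule names, weights, and lead fields are invented for illustration; the real rules came from the client's sales team's heuristics. The useful property: every rule is a named predicate, so each score is explainable by construction, and the fired-rule names become exactly the labels you can later check against real conversion outcomes.

    # A minimal sketch of a rules-based scoring engine. Rule names, weights,
    # and lead fields below are invented for illustration.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        name: str                        # surfaced in the UI as a "custom scoring rule"
        points: int                      # contribution when the predicate matches
        applies: Callable[[dict], bool]

    RULES = [
        Rule("Active in the last 7 days", 30, lambda lead: lead.get("days_since_active", 99) <= 7),
        Rule("Invited a teammate",        20, lambda lead: lead.get("invites_sent", 0) > 0),
        Rule("Free email domain",        -10, lambda lead: lead.get("email", "").endswith(("@gmail.com", "@yahoo.com"))),
    ]

    def score(lead: dict) -> tuple[int, list[str]]:
        """Return the total score plus which rules fired: explainability for free."""
        fired = [rule for rule in RULES if rule.applies(lead)]
        return sum(rule.points for rule in fired), [rule.name for rule in fired]

    total, reasons = score({"days_since_active": 3, "invites_sent": 2, "email": "ana@acme.io"})
    print(total, reasons)  # 50 ['Active in the last 7 days', 'Invited a teammate']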

Engagement #3: The "AI internal search"

Enterprise client wanted "AI-powered semantic search across all our Confluence + Slack + Google Drive." We asked: how does your team currently search? They said: "we don't, we ask each other on Slack."

The problem wasn't search quality. The problem was that nobody curated knowledge — most of the useful answers lived in six-month-old, unindexed Slack threads. AI search wouldn't fix that; it would surface the same stale answers, just at higher precision.

The right answer was a documentation initiative — pick the 50 most-asked questions, write actual answers, put them in one place. Three months later they could revisit AI on top of a corpus that was actually worth searching. They went with another shop for that eventual AI build; we sent them a list of doc-ops contractors instead.

The pattern

All three engagements share the same shape: the team pattern-matched on AI as the answer before diagnosing the problem. AI fits a specific class of problem — fuzzy input, contextual reasoning, output that needs to flex per request. It's a poor fit when:

- the volume is small and mostly repetitive, so a handful of well-written static answers covers it (Engagement #1)
- there's no data: no labelled outcomes to learn from, or even to evaluate a model against (Engagement #2)
- the underlying corpus or process is broken, so AI would only amplify what's already wrong (Engagement #3)

Asking "should we use AI for this?" before "what is this?" gets you into trouble fast. The AI build checklist is partly designed to surface this earlier — if the first three questions don't have clear answers, you don't have an AI problem yet.

The bonus engagement we did take

A client wanted "AI for sales call analysis." We said: "build sales call analysis without AI first — get the recording pipeline working, the storage schema settled, the dashboards your team actually uses." They shipped that in three weeks. Then we layered Claude-driven summarisation and theme extraction on top, which took another two weeks. The non-AI foundation was the load-bearing part. Without it, the AI features would have been demo-ware.
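
Here's a minimal Python sketch of that two-stage shape. Everything in it is illustrative: the table and column names are made up, and the model string is a placeholder for whatever is current. The point is structural: because stage one settles the schema first, the AI layer reduces to one nullable column and a backfill loop, not a rebuild.

    # A sketch of the "foundation first, AI second" shape. Table and column
    # names are hypothetical; the model string is a current-model placeholder.
    import sqlite3

    import anthropic  # pip install anthropic

    # Stage 1 (weeks 1-3): plain plumbing. Transcribed recordings land in a
    # settled schema the dashboards can query. No AI anywhere yet.
    db = sqlite3.connect("calls.db")
    db.execute(
        """CREATE TABLE IF NOT EXISTS calls (
            id          INTEGER PRIMARY KEY,
            rep         TEXT NOT NULL,
            recorded_at TEXT NOT NULL,
            transcript  TEXT NOT NULL,
            summary     TEXT  -- stays NULL until stage 2 runs
        )"""
    )

    # Stage 2 (weeks 4-5): layer summarisation over whatever stage 1 collected.
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def summarise(transcript: str) -> str:
        response = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=500,
            messages=[{
                "role": "user",
                "content": "Summarise this sales call in five bullet points, "
                           f"then list the objections raised:\n\n{transcript}",
            }],
        )
        return response.content[0].text

    # Backfill anything stage 1 has already collected.
    rows = db.execute("SELECT id, transcript FROM calls WHERE summary IS NULL").fetchall()
    for call_id, transcript in rows:
        db.execute("UPDATE calls SET summary = ? WHERE id = ?", (summarise(transcript), call_id))
    db.commit()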

Not sure if AI is the right tool? We'll give you a straight answer.

BOOK A SCOPING CALL → SEE SERVICES →