PLAYBOOK

Anatomy of a working AI SDR

Most AI SDRs we've seen in the wild fail in the same three ways. Here's the architecture we use that doesn't — and the realistic budget you need to ship one.

10 MIN READ · UPDATED 2026-03-18 · BY PINTOED AI STUDIO

The three failure modes

1. Generic outreach at scale

The majority of "AI SDR" tools on the market are sequence runners with an LLM stitched in to vary the wording. The contact gets a mail-merge email that sounds slightly different from yesterday's mail-merge email. Reply rates collapse fast — recipients can smell the volume, and so can spam filters. After three months the sender domain is in the bin.

2. Hallucinated personalisation

The fancier tools attempt per-prospect personalisation by feeding the model a LinkedIn profile or company website and asking for "an observation." Without grounding, the model invents things — it compliments a feature that doesn't exist, references a project the person didn't work on, or fabricates company context. One bad hallucination per 50 sends destroys the campaign and burns the list.
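The cheapest guard against this failure mode is structural: require every personalisation claim to carry a citation, and refuse to send when one is missing. A minimal sketch, assuming a hypothetical `Observation` record (the names and fields here are illustrative, not from any specific tool):

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One personalisation claim plus where it was found."""
    text: str
    source_url: str  # empty string means the claim is ungrounded

def sendable(observations: list[Observation]) -> bool:
    """Gate the send: every claim must cite a real source, and there
    must be at least one claim. Ungrounded drafts never go out."""
    return bool(observations) and all(o.source_url for o in observations)
```

The point is that the gate is deterministic code, not another model call: an LLM can still invent an observation, but it cannot invent its way past a missing citation.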

3. No human escape valve

The agent treats every reply as a sales objection to handle. When a prospect asks a substantive question, the model responds with "great question, are you free for a 15-minute call?" — which kills the conversation. Genuine buying intent dies because the agent has no signal telling it to hand off to a human.

What we build instead

Architecture

Three layers, plus humans:

  1. Targeting layer. An ICP-aware enrichment pipeline (we use Clay + a custom Claude waterfall) that produces a tight list of accounts and named contacts. Lower volume than typical outbound — ~200 per SDR-week, not 5,000.
  2. Research layer. Per-prospect, the system pulls grounded context from real sources: recent press releases, the prospect's LinkedIn posts (if accessible), the company's funding history, public job postings. Outputs a 3-bullet "observation pack" with citations.
  3. Outreach layer. An LLM (we default to Claude Sonnet) drafts the first email using the observation pack as the only personalisation source. The draft goes to a human SDR for ≤30 seconds of review before sending — they're checking for cringe, not editing.
  4. Reply handling. Replies route by intent: positive replies hit a human immediately, neutral replies get a sequenced follow-up, negative replies trigger a polite unsubscribe. The model never tries to handle a substantive reply alone.
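The reply-handling rule in step 4 can be sketched as a small router. The keyword heuristic below stands in for whatever intent classifier you actually run (an LLM call, in our stack); the cue lists and action names are illustrative assumptions, not a spec:

```python
from enum import Enum, auto

class Intent(Enum):
    POSITIVE = auto()
    NEUTRAL = auto()
    NEGATIVE = auto()

# Hypothetical cue lists standing in for a real intent classifier.
NEGATIVE_CUES = ("unsubscribe", "not interested", "remove me")
POSITIVE_CUES = ("interested", "call", "demo", "pricing")

def classify(reply: str) -> Intent:
    text = reply.lower()
    # Check negative cues first so "not interested" never reads as positive.
    if any(cue in text for cue in NEGATIVE_CUES):
        return Intent.NEGATIVE
    if any(cue in text for cue in POSITIVE_CUES):
        return Intent.POSITIVE
    return Intent.NEUTRAL

def route(reply: str) -> str:
    intent = classify(reply)
    if intent is Intent.POSITIVE:
        return "handoff_to_human"    # a person replies, never the model
    if intent is Intent.NEGATIVE:
        return "polite_unsubscribe"  # suppress the contact, stop the sequence
    return "sequenced_follow_up"     # neutral: stay in the automated cadence
```

The design choice worth copying is that the model's output is a label, not prose: a misclassification sends a reply down the wrong lane, but it never puts model-generated words in front of a prospect with substantive intent.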

What "working" looks like

What it costs

For one full-time SDR equivalent worth of pipeline:

Common asks we say no to


Building an AI SDR? We've shipped these. Skip our scars.

SCOPE A BUILD → RUN THE NUMBERS →