What this is — and isn't
This is the economics companion to our Anatomy of a working AI SDR piece. The earlier piece is about architecture. This one is about money: what payback looks like when the architecture is right, and what variables flip the curve from "obvious win" to "obvious kill."
Engagement A: Paid back in 3 weeks
B2B SaaS client, 6-person sales team, $4.8M ARR, ACV ~$28K, sales cycle 6 weeks. Their existing outbound was three SDRs working LinkedIn + email, generating ~120 sourced meetings/quarter.
What we built: a Clay-orchestrated researcher that scored target accounts, drafted personalised outbound emails, and queued them for SDR approval before sending. Sequence handoffs into Apollo for deliverability. Replies routed to the existing AEs.
Outcome: Q1 after deployment, sourced meetings rose from 120 to 198 with the same 3 SDRs. Build cost: $42K. Bookings added (per their close-rate baseline, counting only the 24 incremental meetings that could complete the 6-week cycle in-quarter): 24 × ~12% close × $28K ACV ≈ $80K of bookings in the quarter, plus accelerated pipeline that compounded.
Payback: 3 weeks of measured pipeline.
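The bookings arithmetic above, spelled out (figures are the ones quoted in the text; the code simply reproduces the calculation):

```python
# Worked arithmetic for Engagement A.
incremental_meetings = 24   # meetings able to complete the 6-week cycle in-quarter
close_rate = 0.12
acv = 28_000
build_cost = 42_000

in_quarter_bookings = incremental_meetings * close_rate * acv
print(in_quarter_bookings)                 # 80640.0 — the ~$80K in the text
print(in_quarter_bookings >= build_cost)   # True — build cost covered in-quarter
```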
Why it worked: the team already had a working outbound motion. The AI added throughput on top of it. The variables that mattered: good ICP definition, working CRM data, AEs who were already good at converting meetings, a sales cycle short enough that payback was visible inside one quarter.
Engagement B: Paid back in 9 months
Mid-market services firm, ~$18M revenue, ACV mid-six-figures, sales cycle 11 months. Owner-led sales — no SDRs, just the founder doing all the outbound himself.
What we built: a similar shape to Engagement A but tuned for the lower volume / higher value. Researcher + drafter + send queue. Founder reviewed every email before send.
Outcome: outbound volume tripled (founder went from sending 40 emails/week to a comfortable 130). Replies per send held flat. First closed deal sourced from the new flow took 7 months (consistent with the existing sales cycle). By month 9 the pipeline had compounded enough to clearly cover the build cost.
Payback: 9 months of pipeline-to-revenue.
Why it took longer: the sales cycle. Long-cycle B2B imposes a real structural delay between "AI SDR helped" and "money in the bank." The math still works, but the client has to be willing to wait, and most aren't. We now warn every long-cycle client up front.
Engagement C: Killed at week 6
Early-stage startup, $400K ARR, ACV ~$1,200/yr, sales cycle 2 weeks. Founder wanted "AI SDR to handle our entire top of funnel." We took the engagement on a clear pilot scope.
What broke: the unit economics. At $1,200 ACV, even a 100% meeting-to-close rate wouldn't cover an SDR — AI or human — at any realistic meeting volume. The right motion at that ACV is product-led growth, not outbound. We told them this in the kickoff. They wanted to test anyway. We agreed to a 6-week pilot with explicit kill criteria.
Six weeks later: 14 meetings sourced, 3 closed, ~$3,600 of bookings against a build that would have cost ~$30K to take to production-quality.
Payback: never, given the ACV.
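The "100% close rate still doesn't cover an SDR" claim checks out on a napkin. Figures are from the pilot; the $80K all-in annual SDR cost is our own illustrative assumption, not a number from the engagement:

```python
# Best-case sanity check for Engagement C.
acv = 1_200
meetings_per_quarter = 14                  # the pilot's actual sourcing rate
sdr_annual_cost = 80_000                   # assumed all-in cost, AI or human

# Annualize the pilot rate and assume every single meeting closes.
best_case_annual_bookings = meetings_per_quarter * 4 * 1.0 * acv
print(best_case_annual_bookings)           # 67200 — below cost at perfect conversion
print(best_case_annual_bookings < sdr_annual_cost)   # True
```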
We killed it. Refunded the back half of the engagement. Sent them a write-up of what to do instead — a self-serve flow with onboarding nudges, an in-product upgrade path, and a single human SDR for enterprise leads only.
Why we took it knowing the ACV was wrong: founder wanted us to try. We agreed because the kill criteria were clean and a negative result is also a delivered result. We'd do the same again — but only with kill criteria, never on an open-ended retainer.
The four variables that actually move payback
Across these three and the others we've shipped, the payback curve is set by four numbers. In rough order of impact:
- ACV. Below ~$5K ACV, AI SDR rarely justifies itself. The model spend, pipeline tooling, and human reviewer time don't fit under the deal-size envelope. Above ~$30K ACV, the math is forgiving.
- Sales cycle. Determines how long until the AI SDR's contribution shows up in revenue. 4-6 week cycles → fast feedback. 9-month cycles → patience required.
- Existing pipeline quality. AI SDR multiplies meeting volume; it doesn't fix conversion. If your AEs close 12% of meetings, doubling the meeting count roughly doubles bookings. If they close 1%, doubling the count doubles a number that was too small to begin with — which usually doesn't pay back.
- Reviewer / approval bottleneck. The economics break if the human reviewing AI-drafted emails becomes the bottleneck. We design the queue + approval flow specifically to keep human time per email at <30 seconds. When that creeps to 2 minutes, productivity gains halve.
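How the variables interact can be sketched as a toy payback model. Everything here — the parameter names, the default monthly run cost, the month-by-month loop — is our own illustrative assumption, not the firm's actual calculator:

```python
def payback_months(acv, close_rate, incremental_meetings_per_month,
                   cycle_months, build_cost, monthly_run_cost=2_000):
    """Months until cumulative bookings from AI-sourced meetings cover
    the build cost. Returns None if the engagement never pays back."""
    monthly_bookings = incremental_meetings_per_month * close_rate * acv
    if monthly_bookings <= monthly_run_cost:
        return None                       # run cost eats the bookings: never
    month, cumulative = 0, 0.0
    while cumulative < build_cost and month < 120:
        month += 1
        if month > cycle_months:          # bookings lag by one sales cycle
            cumulative += monthly_bookings - monthly_run_cost
        else:
            cumulative -= monthly_run_cost
    return month if cumulative >= build_cost else None

# Engagement-A-shaped inputs: ~8 incremental meetings/month, 6-week cycle.
print(payback_months(28_000, 0.12, 8, 1.5, 42_000))   # 3
# Engagement-C-shaped inputs: low ACV, bookings never outrun the run cost.
print(payback_months(1_200, 0.20, 8, 0.5, 30_000))    # None
```

Note how the cycle length only shifts the start of the curve, while ACV and close rate set its slope — which is why ACV tops the list above.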
The kill criteria we set on every engagement
After Engagement C, we now set explicit kill criteria with the client up front. Specifically:
- Meetings sourced per week — target by week 4, kill threshold by week 8.
- Reply rate — target band, anything below floor by week 4 means scope or message has to change.
- Reviewer time per email — must stay under 30 seconds by week 6 or queueing logic gets re-cut.
- Pipeline-to-bookings (cycle-adjusted) — measured at the cycle-length checkpoint, not earlier.
If two of those four miss, we have a frank conversation about whether to continue. About one in five engagements ends here; the other four keep going and pay back.
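The two-of-four rule is mechanical enough to express as code. A minimal sketch — the threshold values are illustrative placeholders, not the firm's actual kill criteria:

```python
# Each criterion maps a measured value to pass/fail. Thresholds are assumed.
THRESHOLDS = {
    "meetings_per_week":       lambda v: v >= 4,     # sourced meetings target
    "reply_rate":              lambda v: v >= 0.02,  # floor of the target band
    "reviewer_seconds":        lambda v: v <= 30,    # per-email review time
    "cycle_adjusted_bookings": lambda v: v >= 1,     # at the cycle checkpoint
}

def missed_criteria(metrics: dict) -> list[str]:
    return [name for name, ok in THRESHOLDS.items() if not ok(metrics[name])]

def needs_kill_conversation(metrics: dict) -> bool:
    # Two or more misses triggers the continue/kill conversation.
    return len(missed_criteria(metrics)) >= 2

example = {"meetings_per_week": 2, "reply_rate": 0.01,
           "reviewer_seconds": 25, "cycle_adjusted_bookings": 1}
print(needs_kill_conversation(example))   # True — two of four missed
```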
The ROI calc, briefly
Our AI SDR vs human calculator lets you plug in ACV, cycle, and meetings/week to see where the curve crosses. The honest version of the answer: below ACV $5K it almost never crosses. Between $5K and $30K it depends mostly on cycle and existing conversion rates. Above $30K it almost always crosses, fast.
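The shape of that crossover can be sketched directly. The cost model below (build cost, monthly run cost, default close rate, 24-month horizon) is our own illustrative assumption, not the calculator's actual internals:

```python
def crossover_month(acv, cycle_months, meetings_per_week, close_rate=0.10,
                    build_cost=40_000, monthly_run_cost=2_000, horizon=24):
    """First month where cumulative bookings overtake cumulative cost,
    or None if the curves never cross within the horizon."""
    bookings = 0.0
    for month in range(1, horizon + 1):
        cost = build_cost + month * monthly_run_cost
        if month > cycle_months:              # revenue lags by one sales cycle
            bookings += meetings_per_week * 4 * close_rate * acv
        if bookings >= cost:
            return month
    return None

print(crossover_month(1_200, 0.5, 5))    # None — sub-$5K ACV doesn't cross
print(crossover_month(30_000, 1.5, 5))   # 2 — $30K+ ACV crosses fast
```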
The summary
AI SDR is one of the highest-ROI plays in B2B AI in 2026 — when the economics support it. The most common mistake we see is teams pattern-matching on the success stories without checking whether their ACV and cycle are in the win zone. If you're under $5K ACV, the right move is product-led growth, not outbound. If you're at $30K+ ACV with a working sales cycle, this is the easiest yes in our service line.
For the architecture side of the same story, see Anatomy of a working AI SDR.