What "deflection rate" actually counts
The standard definition: a ticket is "deflected" if the AI handled it without escalating to a human. The number on the AI vendor's dashboard answers exactly one question: "what percentage of incoming tickets did the AI handle, end-to-end, without involving a human?"
That number does not measure whether the customer's problem was solved. It measures whether the customer gave up.
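For concreteness, the dashboard number is little more than this. A minimal sketch in Postgres-flavoured SQL, assuming a hypothetical tickets table with an escalated_to_human flag (our names, not any vendor's schema):

```sql
-- Deflection rate, as vendors typically compute it: tickets the AI
-- closed without a human touching them, over all recent tickets.
-- Table and column names are hypothetical.
SELECT
  100.0 * COUNT(*) FILTER (WHERE NOT escalated_to_human) / COUNT(*)
    AS deflection_rate_pct
FROM tickets
WHERE created_at >= CURRENT_DATE - INTERVAL '30 days';
```

Note what's absent from that query: any signal from the customer at all.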
Why this matters
A customer who asks "where's my refund?" gets an AI reply. The AI says: "Refunds typically take 5-7 business days. Please check back if you don't see it." The customer doesn't reply. The ticket closes. Deflected.
Two weeks later that customer files a chargeback. Or churns. Or writes a Reddit post. Or never buys from you again. The AI vendor's dashboard still says "deflected." Your business has lost the customer.
This is not a hypothetical. It's the dominant failure mode of "high deflection rate" deployments we audit. The number on the dashboard is up. The actual customer outcome is down.
The metric that doesn't lie: resolution rate
Resolution rate: a ticket is resolved if the customer's underlying issue is actually fixed. Two practical ways to measure it:
- Repeat-contact within 14 days. Did the same customer come back about the same issue? If yes, the original ticket wasn't resolved — it was deferred. Subtract from the resolution count.
- Explicit confirmation. Ask the customer at the end of the AI conversation: "Did this solve your problem?" Yes/no. Count yes.
Neither method is perfect. Both are an order of magnitude more honest than deflection rate. We use both, weighted: explicit confirmation as the primary signal, repeat-contact as a backstop catching customers who didn't engage with the confirmation prompt.
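A minimal sketch of that weighting, again in Postgres-flavoured SQL. The column names are ours and hypothetical: confirmation_answer holds the customer's reply to the closing prompt (NULL if they ignored it), and returned_within_14d is the repeat-contact flag:

```sql
-- Resolution rate with explicit confirmation as the primary signal
-- and repeat-contact as the backstop for non-responders.
SELECT
  100.0 * AVG(
    CASE
      WHEN confirmation_answer IS NOT NULL
        THEN (confirmation_answer = 'yes')::int  -- primary: customer said it was solved
      ELSE (NOT returned_within_14d)::int        -- backstop: no repeat contact in 14 days
    END
  ) AS resolution_rate_pct
FROM ai_closed_tickets;
```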
Why vendors push deflection
Deflection rate is easy to measure (count the tickets that didn't escalate) and almost always looks high (any ticket that goes silent counts). Vendors write contracts and pricing pages around it because it produces large numbers in board decks.
Resolution rate is hard to measure (requires customer signal or longitudinal data) and often looks worse than deflection rate (because some "deflected" tickets weren't resolved). Vendors don't put it on dashboards because the number is smaller. The number is also more useful.
What "good" looks like
Across the AI customer-support engagements we've shipped (see the 80/20 playbook), these are the typical numbers for a well-deployed system:
- Deflection rate (what the vendor reports): 55-75%
- Resolution rate (what we measure): 38-58%
The gap between those numbers — typically 15-20 points — is the customers the AI thought it handled but didn't. Some of that gap is "customer was satisfied but didn't reply" (real deflection). Some of it is "customer gave up" (false deflection). The fix for the false-deflection share is mostly better escalation triggers and clearer "did this help" prompts.
How to start measuring it tomorrow
Add two columns to your existing CS tooling:
- "Did the AI close this ticket?" True/false. You probably already have this.
- "Did the customer come back about the same thing within 14 days?" Computed daily by joining tickets on customer-id and tag/category similarity. One SQL query.
Resolution rate = (AI-closed tickets that DIDN'T return within 14 days) / (total AI-closed tickets). Show this number in your dashboards alongside deflection rate. Watch which one moves when you change the system.
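In SQL terms, the repeat-contact-only version of that formula is one more short query over the flag computed above (hypothetical ticket_outcomes table holding the two new columns; the weighted version sketched earlier layers explicit confirmation on top):

```sql
-- Resolution rate from the two tracked columns: AI-closed tickets
-- with no repeat contact in 14 days, over all AI-closed tickets.
SELECT
  100.0 * COUNT(*) FILTER (WHERE NOT returned_within_14d) / COUNT(*)
    AS resolution_rate_pct
FROM ticket_outcomes
WHERE closed_by_ai;
```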
The first time you measure it, the resolution rate will be lower than you expected. That's the point. Now you can fix what's actually broken.
The summary
Deflection counts whether the customer gave up. Resolution counts whether their problem got fixed. Optimise for resolution. The vendor's dashboard will keep showing deflection. Track resolution yourself, in your own tooling. The number will be smaller. The number will also be true.