PLAYBOOK

What "AI strategy" means when you're under $10M ARR

Most "AI strategy" content is written for Fortune 500 companies with budget for a deck and an offsite. If your business is under $10M ARR, that advice is the wrong shape. Here's what the term actually means at the scale you operate at — three concrete questions, no deck.

6 MIN READ · UPDATED 2026-04-12 · BY PINTOED AI STUDIO

The advice that doesn't fit

Open any McKinsey or BCG piece on AI strategy. You'll see mentions of "AI Centers of Excellence," "responsible-AI governance councils," "enterprise change management," "transformation portfolios." All of those make sense for an organization with 5,000 employees.

For a 12-person company, a 50-person company, even a 200-person company, that advice is a costume. You don't need a council. You don't need a transformation. You need to know what to ship next.

Below is what we actually mean by AI strategy when working with sub-$10M-ARR clients. Three questions, in order.

Question 1: Where does each repeated knowledge task live?

List the things people on your team do more than once a week that involve reading something, thinking, and producing output. Customer emails. Outbound personalization. Quote generation. Status reports. Code review on a particular subsystem. Onboarding new hires.

That list — usually 8–15 items long — is your AI opportunity surface. Not "transformation." Not "verticals to disrupt." Just "what is your team retyping, rereading, or rewriting this week that a model could draft?"

The strategy lives in this list. Most of the list will be small. Some items will be 30 minutes saved per person per week. That compounds. Add up the time across the team and you have a real number — both for prioritization and for the ROI conversation that comes later.
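That "real number" is simple arithmetic. As a minimal sketch (the figures and the function name are hypothetical, for illustration only):

```python
# Back-of-envelope ROI for one repeated knowledge task.
# All numbers are hypothetical examples, not benchmarks.

def annual_hours_saved(minutes_per_person_per_week, people, weeks_per_year=48):
    """Hours per year a small automation saves across the whole team."""
    return minutes_per_person_per_week * people * weeks_per_year / 60

# e.g. 30 minutes/week saved for each of 12 people:
hours = annual_hours_saved(30, people=12)
print(round(hours))  # prints 288
```

Even the "small" items clear a few hundred hours a year once you multiply across the team, which is the number that carries the later ROI conversation.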

Question 2: Which of those tasks have a clear "right answer"?

The tasks where AI works first are the ones with a definable correct output. "Categorize this ticket" is one. "Draft a reply in our tone, citing the help docs" is another. "Summarize this Zoom transcript into the standard meeting-notes format" is another.

The tasks where AI works second — or never — are the ones where the right answer is contested. Strategic positioning. Hiring decisions. Anything that depends on judgment the team itself doesn't agree on.

Cross out everything in your list from question 1 that doesn't have a clear definition of correct output. The list shrinks. The remaining items are where you should start. The crossed-out items aren't bad — they just aren't where AI lands first.

Question 3: What's the smallest test that proves it's worth doing?

For each remaining item: design the smallest possible test. Not a pilot. Not a "POC." A test. One person, one workflow, two weeks, a yes-or-no answer at the end.

Examples: A support lead has a model categorize two weeks of incoming tickets, then checks whether the labels matched the human ones often enough to trust. One rep drafts replies with a model that cites the help docs, then compares editing time against writing from scratch. One manager runs Zoom transcripts through the standard meeting-notes format and asks whether anyone could tell the notes were machine-drafted.

Two-week tests with clear yes/no outcomes are the unit of AI strategy at sub-$10M ARR scale. They produce data, they produce shipped value, and they keep you out of the deck-and-roadmap trap that Fortune 500 advice will pull you into.

What you don't need

You don't need an AI Center of Excellence, a governance council, a transformation portfolio, or an offsite. At this scale, those are overhead dressed up as strategy. A shared doc listing the tests, their owners, and their yes/no outcomes is all the governance a sub-$10M company requires.

The honest version of AI strategy at this scale

If a peer asks you "what's your AI strategy?" the honest answer is: "I have a list of 12 places it might help, I'm running a two-week test on the top three this quarter, and I'll know what I'm doing in 60 days."

That's not a deck answer. It is the right answer. And it's the one that will produce more usable AI in 12 months than the company down the street with the strategy doc and the offsite.

For the discipline behind running those tests well, see our AI build checklist. For the anti-pattern of doing too much too soon, see When NOT to build with AI.

Want help running the first three two-week tests? That's most of what we do.
