7 Priorities Is Not a Strategy
Mar 31, 2026
A company I spoke with last quarter had their AI strategy ready. Slides, a dedicated section, a product vision that was going to be powered by AI. The founder had clearly thought hard about it.
When I asked what problem they were solving, he gave me three answers. When I asked which of the three the company was actually going to win at, he said they were going after all of them.
That's a strategy problem with AI features attached.
This conversation is happening a lot. Teams are genuinely excited about what AI can do, they're adding it to their products, and they're calling the sum of those additions a strategy. Which makes sense on the surface. The building has never been cheaper, the prototyping never faster, the gap between an idea and something testable never shorter. Engineers are more productive than they've ever been. Designers too.
When building was slow and expensive, the limiting factor was capacity. You could only build so much, so the wrong bets were naturally constrained by time and headcount. When execution gets cheaper, the constraint shifts to having something worth building at all. More capacity, pointed at a fuzzy target, doesn't cost less. It costs more.
Here's the part that doesn't get said enough: product strategy is primarily a creative exercise. Most teams treat it as analytical: run the data, map the market, prioritise the opportunities. But the actual work, the diagnosis of what's really happening and what you're specifically going to do about it, is creative. It requires judgment you can't assemble from inputs alone. AI is extraordinary at the analytical part. It's the creative part that it can't replicate, which is also the part most strategies are missing.
The pattern I keep running into is that teams have done the analysis. They have data. They have a market map. They have a list of priorities that's really a list of things someone didn't want to say no to. What they haven't done is the harder and slower work of arriving at a specific point of view: what they believe, what they're willing to bet on, what they're explicitly not doing. That last question is usually the one nobody can answer.
A few months ago I asked a CPO to walk me through her company's strategy. She handed me a deck. Twelve slides, six strategic focus areas, a vision statement that could have applied to any company in the space. I asked which two of those six she'd keep if she had to genuinely pick two things to win at. She said they were all important. When I asked her team separately, I got five different answers.
They were adding AI to three of the six focus areas.
The AI work wasn't wrong, exactly. Some of it made sense on its own terms. But it was sitting on top of a strategy that hadn't done its creative work yet, which meant the team was accelerating in several directions at once. The building got faster. The confusion didn't.
Anyway. There's a version of this that bothers me more than the bad strategy itself, which is the assumption that the tools are the differentiation. Your competitors have Cursor. They have access to the same models. They can prototype just as quickly. So when execution capacity is equivalent, what tells you which company wins? It goes back to the quality of the thinking upstream of the build: who has a clearer read on what's actually happening, who understands the customer better, who has made the harder choices about what they're not doing. Those things have always been the job. AI just makes the gap between doing them well and not doing them at all slightly more consequential.
The customer understanding piece is worth sitting with for a second. I keep seeing teams use AI to summarise feedback. Run a week of support tickets through a model, get a clean themed output, move on. Fast and tidy. Also almost entirely useless for the thing you actually need to do, which is understand how customers actually think, not just what they said in aggregate. The insight that differentiates tends to come from reading verbatim. All 160 comments, one at a time. Not because it's efficient but because the summary abstracts away exactly the truth you're trying to find. The model tells you users find there's too much friction. You already knew that. What you didn't know is which kind of friction, in whose words, from which segment. You can't get there from the summary.
That is the slow, uncomfortable work that AI doesn't shortcut. It just makes it easier to skip, because something that looks like the output appears faster.
Rumelt's test hasn't moved. A real strategy has a clear diagnosis, a guiding policy, and coherent actions that reinforce each other. Specific enough that the person building the thing can explain it in thirty seconds without referring to a slide. If they can't, the strategy and the work have separated from each other, and the gap is growing, probably faster than before.
The hard thing was never the building. It was always knowing what to build and why, and being willing to say out loud what you were not doing. AI hasn't changed that. It's made the execution tools faster, raised the stakes on getting the thinking right, and made it easier to mistake one for the other.
That part is still entirely human.