Prioritisation · 12 min read

From Magic Pixie Dust to Strategic Decisions: How Product Leaders Can Navigate AI Requests

Figure: the Cynefin model for making AI prioritisation decisions, as described in the article below.

As part of CPO Circles, I convene round tables every month with some of the smartest people I know, to discuss the things that they’re challenged by. We talk through these issues together, sharing experiences and advice - and when that fails, as it occasionally must, we commiserate.

I take notes from every one of these discussions, and put them behind a paywall. Today, I’m happy to share the framework that came out of a discussion with Antoine Fourmont-Fingal that started at one of the roundtables, and continued to develop. 

That word - framework - is important. This approach doesn’t tell you what to do in a prescriptive way - there’s an art and a science to that, and it’s highly context-specific. Instead, it gives you an approach to have better discussions about the opportunities that are under consideration, so that the ROI can be estimated and the work prioritised in a collaborative way.

Want more like this? Apply to join CPO Circles today.

Introduction

As Product leaders, we’ve always had the challenge of the CEO (or Sales lead) asking, “Hey can’t we just add this feature in quickly?” That leads to the dance of trying to evaluate if this is actually a good idea, if it is easy to do, and if it’s more important than everything else that the team is already working on.

The challenge has been compounded with the rapid adoption of AI tools that promise the ability to develop new functionality (and entirely new products!) just by coming up with the right prompt. 

And yes, the tooling is changing how we work - how quickly we can prototype, test, and even ship things. But what it doesn’t do is tell us what we should do, and why. There’s still a need for human judgement, and to work with multiple parts of the business to understand if this opportunity is: 

  • Something we could now easily do
  • Something which should be orders of magnitude cheaper
  • Something which should be orders of magnitude faster
  • Something which has become commodified
  • Something we shouldn’t be doing anymore as a business or as a product

However! If you have a proper framework for holding a prioritisation discussion with your partners, you have a chance. It helps to move things from the mindspace of AI as ‘Magic Pixie Dust’ that can do anything, to a sensible chat about where the opportunities really lie. Even better, there’s an existing approach for thinking about systems that works really well as a model for this discussion.

You might have come across Cynefin over the past few years - it’s a Welsh word, pronounced ‘Keh-nevin’, and the Curriculum for Wales defines it as 'the place where we feel we belong, where the people and landscape around us are familiar, and the sights and sounds are reassuringly recognisable.’ Dave Snowden’s been working with it since the turn of the century, and uses it as a way to aid decision-making, classifying things into 4 specific domains: 

  • Clear
  • Complicated
  • Complex
  • Chaotic

You can read more about Dave’s work at The Cynefin Company.

In our CPO Circles discussion, we were inspired by this framework and used it as a rough guide to prioritising product development work. While there are plenty of prioritisation models out there - MoSCoW, ICE, RICE, etc - it became clear that the word 'AI' amplifies the subjectivity inherent in these tools: people underestimate the effort required to do the work, and overestimate the gains, by a much larger factor than usual. We needed a new approach, to ensure that we could have a better discussion - not about whether to use AI tools, but to clarify which ideas were most likely to be successful, what the effort and reward might be, and what approach to take.

In the end, it comes down to something annoyingly simple that pops up again and again: the right tool for the job is the one that helps you have a more informed discussion, faster, so that better decisions can be made at pace. As we’ve tried this with our friends and colleagues, we’ve found that using the Cynefin model as a base does just that:

  • It forces people to be informed about how often the feature will be used, and the real value it creates
  • It deals with the levels of risk and complexity involved
  • It moves the discussion into the realm of tangible business decisions and tradeoffs, and away from Magic Pixie Dust and ‘Can’t we JUST do X?’
  • It helps the Product Development team to better understand why these things are being asked for, and for the Business operations teams to become invested in the development process - encouraging the teams to communicate and partner in seeking the most expedient approaches to achieving genuine business goals. 

One more thing: we’ve tested this approach against the idea of automating existing processes, or of creating new processes where automation makes them feasible and viable. There are other use cases for AI-based approaches, but these are where the majority of day-to-day opportunities come up, and where the lowest-hanging fruit still lies in most companies right now.

How to use this approach:

Discovery

First, we need to come to an agreement on how to approach the opportunity that’s being presented. That involves qualifying which domain is most applicable. To do this, we ask a series of questions. We’ll use the example of a car dealership to illustrate how to apply it in practice.

What is the opportunity? Provide a specific story where this AI solution helps, showing the before and after states.
Yes, this sounds basic - and there’s a good reason for that. This is the foundation of any decent user story, and the basis of any decent canvas. There are good reasons we keep using it.

Tips:

  • Push for concrete, scenario-driven answers, not vague hopes.
  • Use real process maps or customer journeys when possible.
  • If it feels abstract, it probably is - push for detail.

Example:
“A customer visits the dealership’s website at 10 PM with a question about trade-in value. Instead of waiting for office hours, the genAI chatbot answers instantly, collects details, and books an in-person assessment for the next day. Before: email left unanswered until morning. After: lead captured before competitors can.”

How stable is this process? Does it change regularly, or is it a stable & mature approach?
If you’re looking to create scale or find efficiencies, it’s critical that the process being addressed is one that is suitable - one that is stable enough that the proposed approach is both viable and feasible.

Tips: 

  • Distinguish internal process stability and external customer/market shifts.
  • Internal stability: Are dealer scripts/processes stable, or is staff always improvising?
  • External stability: Are customer needs/market context shifting quickly (e.g., new EV incentives)?
  • If changes are frequent, note what triggers them.

Example:
“Internally: The process for handling trade-ins is quite stable—same forms and checks for years. Externally: Customer preferences shift more with season and new car releases, but the core need for fair pricing hasn’t changed.”

How will we measure the success of this? What KPIs have improved, and can we trace the cause back to this work?
This won’t come as a surprise - any strategic decision must be based on both why the thing is being done and how you’ll collectively know it’s been successful. Skipping this risks a lack of alignment; doing it properly means that the people actively working on the opportunity understand the scope, allowing them to work on suitable approaches and make critical prioritisation decisions.

Tips:

  • Probe for cause-and-effect clarity: Is there a direct line between the AI’s output and the business outcome?
  • Use clear metrics: “What numbers move, and why?”
  • Flag answers like “maybe, probably” as signals of complexity.

Example:
“If the chatbot reduces call wait times by 30%, we can directly measure drop in abandoned support calls. Harder: If sales go up, other campaigns or promotions may play a role, so we need to track those too before attributing change to AI alone.”

Do all stakeholders have a similar understanding of the AI goal and success criteria?
This might seem redundant, as it’s not materially different from the previous question. In our experience, it’s always good to go the extra mile and confirm this with key partners to ensure there’s not been any misunderstanding.

Tips: 

  • Ask each to explain in their words; surface gaps explicitly.
  • Run a “repeat the brief” exercise: ask each group for their one-sentence goal.
  • Misalignment = complexity or chaos.
  • Ideal: Everyone converges on a similar story.

Example:

 “CEO: We can save so much on staff! It's just a question of time before AI becomes super intelligent, we have to be in the wave and accept the massive changes - our competitors will do it, and we'll be left behind.”

Do we have anyone in-house with hands-on experience building or running AI solutions, or will this be new for everyone?
From the outside, AI tooling looks like magic. When someone asks, ‘Can’t we just do this?’, they’re missing the fact that just is the most dangerous word in product development - it ensures that the reality of how things are developed and delivered is papered over. For this step, we like to use Liz Keogh’s complexity multiplier for estimating effort:
5x - Nobody in the world has ever done this before.
4x - Someone in the world did this, but not in our organization (and probably at a competitor).
3x - Someone in our company has done this, or we have access to expertise.
2x - Someone in our team knows how to do this.
1x - We all know how to do this.

Tips:

  • Ask for details on actual AI work - implementation, data labeling, tuning, troubleshooting - not just strategy or buzz.
  • Ask for actual project names, not just “worked with AI.”
  • If all expertise is external, note the risk of “learn-as-we-go.”

Example:
“We’ve contracted vendors for some basic ML recommender engines, but have never built, fine-tuned, or integrated genAI ourselves. The closest we have is our API team, who maintains the vendor interface.”
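
As a rough illustration of how the multiplier changes the conversation, here’s a minimal sketch in Python - the function, labels, and numbers are our own, not part of Liz’s model:

```python
# A minimal sketch of applying a complexity multiplier to a base estimate.
# The labels and function are illustrative only - the point is that the
# same ten days of work becomes forty when only a competitor has done it.

FAMILIARITY_MULTIPLIER = {
    "nobody_has_done_this": 5,   # Nobody in the world has done this before
    "done_elsewhere": 4,         # Done somewhere, but not in our organisation
    "done_in_company": 3,        # Done in our company, or expertise is accessible
    "done_in_team": 2,           # Someone in our team knows how to do this
    "routine": 1,                # We all know how to do this
}

def adjusted_estimate(base_estimate_days: float, familiarity: str) -> float:
    """Scale a base estimate by how familiar the work is to the team."""
    return base_estimate_days * FAMILIARITY_MULTIPLIER[familiarity]

# "The chatbot integration is ~10 days" - but nobody in-house has done it:
print(adjusted_estimate(10.0, "done_elsewhere"))  # -> 40.0
```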

Do we have quality, recent, and accessible data to feed and train this AI? Is it structured or unstructured?
Your outputs are only ever as good as the information they’re based on - it’s impossible to make good decisions from bad data. It’s critical to understand what data you’re working with, and how easy or hard it will be to put it to use.

Tips:

  • Ask for examples: “Show me the data fields.”
  • Probe freshness: “When was this last updated?”
  • Are there gaps or protected privacy data?
  • Data quality is a big topic - the data needs to be relevant, sufficient, and unbiased.

Example:
“Website chat logs and call transcripts are stored for just 90 days; stored in different formats, with no labeling for topic or customer satisfaction. Customer records are in another system, so linking will need new work. Sentiment analysis is going to be tricky, as sarcasm doesn’t get analysed well. And we may have trouble with handling multiple languages.”
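
Before committing, it’s worth writing the audit down. Here’s a minimal sketch of what that could look like - the DataFrame and column names are hypothetical, stand-ins for whatever your chat logs actually contain:

```python
# A minimal sketch of a pre-build data audit. Assumes a pandas DataFrame
# of chat logs; all column names ("timestamp", "topic", "csat", "language")
# are hypothetical placeholders for your own schema.
from datetime import datetime, timedelta

import pandas as pd

def audit_chat_logs(df: pd.DataFrame) -> dict:
    """Answer the freshness and completeness questions from the tips above."""
    cutoff = datetime.now() - timedelta(days=90)
    return {
        "rows": len(df),
        "newest_record": df["timestamp"].max(),
        "older_than_90_days": (df["timestamp"] < cutoff).mean(),  # share at deletion risk
        "missing_topic_labels": df["topic"].isna().mean(),        # share unlabelled
        "missing_satisfaction": df["csat"].isna().mean(),
        "languages_present": sorted(df["language"].dropna().unique()),
    }
```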

Can we map the end-to-end flow the AI touches, including all human and system handoffs?
This is a critical step, and it’s the key to understanding if the work will enable straight-through processing or if it's enabling humans to make better decisions, faster. This is the fundamental difference between the Complicated and Complex domains - and discovery here also sometimes leads to an understanding that the opportunity is actually classified as Chaotic. We’ll talk more about what to do with this information in the next section.

Some versions of Cynefin add another domain, Disorder. Roughly, this equates to ‘we don’t know if we can know.’ For our purposes, we’ll lump this in with Chaotic as things that are not currently understood well enough to be mapped.

Tips: 

  • Use diagrams and swimlanes to expose gaps, silos, or fragile links - even whiteboard photos are fine.
  • Identify hand-offs between people, teams, and systems.
  • Gaps or ambiguities signal complexity.

Example:
“Customer question arrives via website chat: routed to genAI, question answered, details stored in CRM, triggers email to manager. But if the customer calls instead, that’s a separate process—handled by the phone team with no AI, data not linked.”

What domain does it belong in?

These questions give us the context needed to understand which domain the request lives in:

Clear
The process exists, it’s well understood, and it’s very straightforward. The data is easily available in a structured format, the rules can be automated without any significant issue, there is minimal risk of disagreement between humans over solutions, and there’s minimal risk associated with the approach.

Things in this domain are ripe for automation, and can usually be handled by logical processes, such as Robotic Process Automation (RPA).

Complicated
We understand the process, but it takes some doing - the rules are well understood, but it takes some time, effort, and expertise to implement them. That may be because the data is unstructured, not easily available, or the rules have some nuance to them. The risk and/or uncertainty here is non-trivial, but we believe that it’s possible to deal with it. The consequences of any decision made along the process can be mapped, traced back, and understood.

Things in this domain are the most obvious opportunities for efficiency - the biggest savings or advantages come from taking something that is cumbersome and making it faster and cheaper. The tooling available - be it RPA, ML, LLM, or something else - means that things which were out of reach in the past are now amazing opportunities. This area is where you'd typically need "experts", and where LLMs (theoretically! Your mileage may vary) can shine by dodging "entrained thinking" and analysis paralysis.

The critical part is to include the guardrails needed to handle exceptions and unexpected consequences.

Complex
We understand what the process is for - the results should be within a predictable range, but there are challenges:

  • How the results are obtained isn’t always easy to document as a clear set of rules
  • Some decisions in the process have a dynamic impact on later steps, sometimes in unpredictable ways
  • We’re dealing with unstructured or incomplete data, which creates challenges for analysis
  • Human judgement is needed for other reasons - especially when we’re dealing with new processes or new approaches to them

In these cases, we can often create economies, but not automate end-to-end. The opportunities here come from finding the Complicated and Clear elements of the Complex situation, leaving humans in the loop with sufficient context to make decisions at specific points in the process.

Chaotic
These systems are poorly understood or seem unpredictable by their very nature. They’re often dynamic situations, evolving on a regular basis - a decision made today, given specific inputs, may not be the same one that would be made tomorrow.
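
To make the sorting tangible, here’s a minimal sketch of how the discovery answers map to a starting domain - the booleans and ordering are our own simplification, and the real classification should happen in discussion, not in code:

```python
# A minimal sketch mapping discovery answers onto a starting Cynefin domain.
# The inputs mirror the discovery questions above; this is a conversation
# aid, not a formal algorithm.

def starting_domain(
    process_mappable: bool,     # can we map the end-to-end flow and handoffs?
    rules_writable: bool,       # can the rules be written down completely?
    data_structured: bool,      # is the data structured and easily available?
    outcomes_traceable: bool,   # can consequences be traced back and understood?
) -> str:
    if not process_mappable:
        return "Chaotic"      # not understood well enough to be mapped yet
    if rules_writable and data_structured and outcomes_traceable:
        return "Clear"        # ripe for straightforward automation, e.g. RPA
    if outcomes_traceable:
        return "Complicated"  # tractable with time, effort, and expertise
    return "Complex"          # keep humans in the loop at decision points

# Trade-in valuation: mappable and traceable, but the rules have nuance
# and the data is messy:
print(starting_domain(True, False, False, True))  # -> "Complicated"
```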

Prioritisation within the domain

"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." - Dr Ian Malcolm (Jeff Goldblum) in Jurassic Park

Once an opportunity is placed in the right domain, there’s the small matter of prioritising it - is this something we need to do now, something to come back to at the right time, or something not worth doing at all?

To do that, we should be asking questions like:
1. How often is the process run?
2. How much time/effort do we save with automation?
3. Are there mistakes being made that we will no longer have to deal with?
4. How much effort will it take to automate this?
5. Is there a reason we should just do it anyway?

How expensive could this process be to run?
This is an often-neglected question, with some real risks attached to it.

  • AI services aren’t always the most economical way to solve a problem. Many AI vendors are not currently profitable, and it’s difficult to predict where these costs will land in a few years.
  • There’s a lot that can be done to optimise your usage of tokens. Moving from a proof of concept to a scalable, production model should include a strong understanding of the service’s financial viability.
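
To keep that discussion grounded, it helps to put even rough numbers on the table. A minimal sketch - every price and volume below is a placeholder, and vendor pricing changes frequently, so substitute your own figures:

```python
# A minimal sketch of a running-cost estimate for an LLM-backed process.
# The per-token prices are placeholders, not any vendor's actual rates.

PRICE_PER_1K_INPUT_TOKENS = 0.005    # USD, placeholder
PRICE_PER_1K_OUTPUT_TOKENS = 0.015   # USD, placeholder

def monthly_cost(conversations_per_day: int,
                 avg_input_tokens: int,
                 avg_output_tokens: int) -> float:
    """Estimate a month of inference costs for a chat-style workload."""
    per_conversation = (
        avg_input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
        + avg_output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    )
    return per_conversation * conversations_per_day * 30

# 500 chats a day, ~2,000 tokens in (prompt + context), ~500 tokens out:
print(f"${monthly_cost(500, 2000, 500):,.2f}/month")  # -> $262.50/month
```
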
What are the consequences of this going wrong?
Faith Forster introduced me to an approach a few years back, a 2x2 model mapping risk and certainty against each other. The benefit of this approach is that it encourages teams to break things down into small enough risk increments that - whenever possible - they can be tested in production. Anytime you can’t do that, it’s critical to ensure that the potential consequences and approaches to mitigation/remediation are discussed in advance. 

Tips: 

  • Consequence scanning and the Black Mirror test are good techniques to use for this
  • Knowing what can go wrong is important - but running through the outcomes with key partners and stakeholders is critical. 
  • You can’t plan for everything - but you can have discussions for some of the most likely scenarios.

Example:

This example is an outlier, of course - but that’s the point. It’s the outliers that will get you.

Car sales are a deliberately complex negotiation in most cases - both sides know that the advertised price is the starting point for a discussion. The sale can be broken down into multiple complicated processes around financing, optional upgrades, and warranties - each of which can, in theory, be made more efficient with straight-through processing. But without a human in the loop to sense-check and approve the final sale, things can go very wrong.

“It’s possible that with the right kind of prompting, the chatbot might sell a $50,000 new car to a customer for US$1. This isn’t something we can just run some unit tests on. What will we do if this happens? Do we want to make it so that every deal gets reviewed and approved by a human?”

Is it worth it for us to do this work?
Lots of things feel important to the requestor, but they're not always worth the effort. There's plenty to say about this, but XKCD has already done the work:
Is It Worth the Time? - xkcd 1205: https://xkcd.com/1205
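
For anyone who wants the arithmetic behind the chart, a minimal sketch - the eight-hour day and five-year horizon are assumptions baked into the example:

```python
# A minimal sketch of the xkcd 1205 question: how much time can you spend
# automating a task before the automation costs more than it saves?

def worth_automating_days(minutes_saved_per_run: float,
                          runs_per_week: float,
                          horizon_years: float = 5.0) -> float:
    """Maximum working days worth spending, assuming 8-hour days."""
    total_minutes_saved = minutes_saved_per_run * runs_per_week * 52 * horizon_years
    return total_minutes_saved / 60 / 8

# A trade-in valuation step that saves 2 minutes, 20 times a week:
print(f"{worth_automating_days(2, 20):.0f} working days")  # -> 22
```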