Prioritisation · 5 min read

From Magic Fairy Pixie Dust to Strategic Decisions, Part 2: Your AI Technology Decision Guide


In Part 1, we covered the basics of adapting Cynefin to have better conversations about prioritising AI work. In this follow-up, we add more detail about when to use which approach: demystifying AI from Magic Fairy Pixie Dust into the specific technologies you might employ in a given situation to get the best results.

Want more like this? Apply to join CPO Circles today.

Technology Mapping Table

For each Cynefin domain: the recommended technologies, why that approach fits, implementation considerations, and example use cases.

Clear

  • Recommended technologies: RPA (Robotic Process Automation), rule-based automation, simple API integrations, workflow automation tools, traditional scripting
  • Why this approach: the process is fully understood; rules are explicit and stable; data is structured; no judgment is required; outcomes are deterministic
  • Implementation considerations: focus on reliability over intelligence; prioritise maintainability; clear error handling; simple monitoring; low technical risk
  • Examples: auto-routing support tickets by keyword; scheduled report generation; data entry from structured forms; standard invoice processing

Complicated

  • Recommended technologies: classical ML (Random Forests, XGBoost), fine-tuned LLMs (task-specific), basic RAG (Retrieval-Augmented Generation), NLP for classification, computer vision models, decision trees with expert rules
  • Why this approach: expertise can be codified; patterns exist in the data; outcomes are predictable within known parameters; specialist knowledge is required; cause-effect relationships are discoverable
  • Implementation considerations: include expert review/validation; build confidence scoring; create exception-handling paths; plan for model retraining; document decision logic; keep human oversight for edge cases
  • Examples: credit risk assessment; medical diagnosis support; legal document review; technical troubleshooting guides; vehicle damage assessment

Complex

  • Recommended technologies: LLMs with human-in-the-loop, advanced RAG (with context management), agentic AI (constrained, with approval gates), hybrid ML + rules systems, semantic search + human judgment, conversational AI with escalation
  • Why this approach: multiple valid approaches exist; context heavily influences outcomes; human judgment adds critical value; data is unstructured or incomplete; dynamic interactions affect downstream decisions
  • Implementation considerations: design clear escalation paths; provide rich context to humans; log all AI suggestions (and the human evaluation of them) for learning; A/B test different approaches; build feedback loops; always enable human override
  • Examples: customer negotiation support; content moderation; strategic research assistance; personalised healthcare plans; complex sales guidance

Chaotic (Internal Sensemaking)

  • Recommended technologies: LLM analysis tools, quick data-exploration prototypes, pattern detection algorithms, sentiment/theme clustering, analytics experimentation, internal research chatbots
  • Why this approach: system behaviour is unpredictable; rules cannot yet be established; you need to discover what patterns exist; internal-facing with low external risk; you are in learning mode to understand the problem
  • Implementation considerations: minimise investment; move fast (you're the user); expect to throw away prototypes; focus on discovering patterns; don't worry about polish; instrument everything; iterate based on insights
  • Examples: analysing why a new feature confuses users; clustering support tickets to find themes; exploring customer segments with unclear needs; testing the feasibility of novel AI approaches; understanding unexplained data patterns

Chaotic (Discovery / Experiments)

  • Recommended technologies: simple user-facing pilots, lightweight LLM prototypes (beta-labelled), no-code/low-code AI tools, heavily instrumented MVPs, limited-release experiments, opt-in exploratory features
  • Why this approach: system behaviour is unpredictable; external-facing, with higher risk; you need to probe real user response; rapid change is expected; you are learning what actually works in the wild. NOTE: this area is high risk and should only be used when appropriate.
  • Implementation considerations: set clear expectations ("beta", "experimental"); label transparently with users; provide easy human escalation/fallback; limit the rollout (opt-in, small percentage); monitor obsessively; be ready to pull back quickly; gather explicit user feedback
  • Examples: feasibility testing and alignment with internal teams and partners; supervised or explicitly labelled user testing of concepts; testing novel interaction patterns

Technology Deep Dive

Clear Domain: Focus on Reliability

Primary Approach: RPA and Rule-Based Automation

  • Traditional workflow automation
  • If-then logic with known conditions
  • Structured data processing
  • API orchestration

When NOT to use AI: if the rules are simple and stable, avoid the overhead of AI/ML. Traditional automation is cheaper, more reliable, and easier to maintain.
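To make this concrete, here is a minimal sketch of Clear-domain automation: a keyword-based support-ticket router, matching the first example in the mapping table. The queue names and keywords are hypothetical; the point is that every rule is explicit, auditable, and cheap to maintain.

```python
# Deterministic ticket routing: explicit rules, no ML.
# Routing table and queue names are illustrative.
ROUTING_RULES = [
    ({"refund", "invoice", "payment"}, "billing"),
    ({"password", "login", "2fa"}, "account-security"),
    ({"crash", "error", "bug"}, "engineering"),
]
DEFAULT_QUEUE = "general-support"

def route_ticket(subject: str) -> str:
    """Send a ticket to the first queue whose keywords appear in the subject."""
    words = set(subject.lower().split())
    for keywords, queue in ROUTING_RULES:
        if words & keywords:  # any keyword match
            return queue
    return DEFAULT_QUEUE

print(route_ticket("Payment failed on my latest invoice"))  # -> billing
```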


Complicated Domain: Leverage Expertise

Primary Approaches:

  • Classical ML: For pattern recognition where labelled data exists (classification, regression, forecasting)
  • Fine-tuned LLMs: When language understanding is needed but domain-specific accuracy is critical
  • Basic RAG: To ground responses in verified knowledge bases without extensive customization

Key Success Factor: The ability to trace decisions back to specific inputs and rules. Build systems that explain their reasoning.
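For instance, "build confidence scoring" with an expert-review path can be as simple as thresholding a classifier's predicted probability. This is a sketch only: the training data is synthetic, and the 0.8 threshold is an illustrative assumption you would tune with your specialists.

```python
# Complicated domain: classical ML with confidence scoring and an
# explicit escalation path to a human expert.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((200, 4))               # stand-in for real features
y_train = (X_train[:, 0] > 0.5).astype(int)  # stand-in labels

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
REVIEW_THRESHOLD = 0.8  # below this, a specialist makes the call

def assess(case: np.ndarray) -> str:
    proba = model.predict_proba(case.reshape(1, -1))[0]
    confidence = proba.max()
    decision = model.classes_[proba.argmax()]
    if confidence < REVIEW_THRESHOLD:
        return f"ESCALATE to expert (confidence {confidence:.2f})"
    return f"Auto-decision: {decision} (confidence {confidence:.2f})"

print(assess(rng.random(4)))
```

Logging each decision with its confidence also gives you the audit trail you need to document decision logic.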


Complex Domain: Augment Human Judgment

Primary Approaches:

  • LLMs with Human-in-the-Loop: AI suggests, humans decide with full context
  • Advanced RAG: Sophisticated context retrieval to support nuanced decisions
  • Constrained Agentic AI: Agents that can take preliminary actions but escalate for approval

Key Success Factor: Design for human-AI collaboration. AI reduces cognitive load and provides options; humans apply judgment, ethics, and contextual wisdom.
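A minimal sketch of that pattern, where `call_llm` is a placeholder for whatever model API you actually use, and the log would live in a durable store rather than an in-memory list:

```python
# Complex domain: AI suggests, the human decides, and both sides of the
# exchange are logged for learning. `call_llm` is a stand-in function.
import datetime

decision_log = []

def call_llm(prompt: str) -> list[str]:
    # Placeholder: substitute your real LLM client here.
    return ["Offer a partial refund", "Escalate to the account manager"]

def assist_decision(context: str) -> str:
    suggestions = call_llm(f"Suggest next steps for: {context}")
    for i, s in enumerate(suggestions):
        print(f"  [{i}] {s}")
    choice = input("Pick a number, or type your own response: ")
    # The human can always override with free text.
    if choice.isdigit() and int(choice) < len(suggestions):
        final = suggestions[int(choice)]
    else:
        final = choice
    decision_log.append({
        "time": datetime.datetime.now().isoformat(),
        "context": context,
        "ai_suggestions": suggestions,
        "human_decision": final,
    })
    return final
```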


Chaotic Domain: Probe and Sense

Two Distinct Scenarios:

Internal Sensemaking:

  • Fast prototyping with LLMs: Quick experiments to understand the problem space
  • Pattern discovery tools: Analytics and clustering to find signal in noise
  • Research-grade AI: Using AI as your research assistant, not your product

Key Success Factor: Speed of learning. You're the user, so iterate freely without external risk.
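As an example, the mapping table's "clustering support tickets to find themes" can be a handful of lines with off-the-shelf tooling. The tickets and cluster count below are illustrative; in a real probe you would iterate on both.

```python
# Chaotic (internal): quick-and-dirty theme discovery in support tickets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tickets = [
    "App crashes when I upload a photo",
    "Cannot reset my password",
    "Photo upload fails with an error",
    "Login link never arrives",
    "Charged twice this month",
    "Refund still not processed",
]

X = TfidfVectorizer(stop_words="english").fit_transform(tickets)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for label, ticket in sorted(zip(labels, tickets)):
    print(label, ticket)
# Eyeball the clusters by hand: the goal is pattern discovery, not polish.
```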

Discovery/Experiments:

  • Minimal viable AI: Simple implementations to gather real-world data under controlled circumstances
  • Design sprints: Build and test concepts to generate alignment, understand viability, and test feasibility
  • Transparent pilots: Beta programs with explicit experimental labeling
  • Heavy instrumentation: Focus on measurement and learning, not optimization

Key Success Factor: Balance learning speed with user safety. Set expectations, provide fallbacks, monitor obsessively.
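To show what those safety rails might look like in code, here is a sketch of a deterministic, opt-in, small-percentage rollout with a fallback path and instrumentation. The rollout percentage and helper functions are assumptions for illustration.

```python
# Chaotic (discovery): a beta-labelled AI feature behind a small opt-in
# rollout, with logging and a safe fallback when the experiment fails.
import hashlib
import logging

logging.basicConfig(level=logging.INFO)
ROLLOUT_PERCENT = 5  # deliberately small exposure

def in_experiment(user_id: str, opted_in: bool) -> bool:
    """Deterministic bucketing: only opt-in users inside the rollout slice."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return opted_in and bucket < ROLLOUT_PERCENT

def experimental_ai_answer(question: str) -> str:
    return "draft answer from the experimental model"  # hypothetical beta path

def standard_answer(question: str) -> str:
    return "answer from the existing support flow"     # proven fallback

def answer_question(user_id: str, question: str, opted_in: bool) -> str:
    if in_experiment(user_id, opted_in):
        try:
            reply = experimental_ai_answer(question)
            logging.info("experiment=ai_answer user=%s ok", user_id)
            return f"[Beta] {reply}"  # transparent labelling
        except Exception:
            logging.exception("experiment=ai_answer user=%s failed", user_id)
    return standard_answer(question)

print(answer_question("user-123", "How do I export my data?", opted_in=True))
```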

Simplified Decision Flow

Start with the Discovery Questions from Part 1

         ↓

1. Is the process stable and rule-based?

   YES → Clear → Use RPA

   NO → Continue

         ↓

2. Can experts codify their decision-making?

   YES → Complicated → Use ML/Fine-tuned LLMs + RAG

   NO → Continue

         ↓

3. Is human judgment essential at key points?

   YES → Complex → Use LLMs with Human-in-the-Loop + Advanced RAG

   NO → Continue

         ↓

4. Is the system poorly understood or rapidly changing?

   YES (internal-facing) → Chaotic (Internal) → Use pattern-discovery tools

   YES (external-facing) → Chaotic (External) → Use for discovery/alignment purposes, or experiment (carefully) in a transparent beta with appropriate safety rails
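If you prefer the flow as something executable, here is the same routing logic as a hypothetical Python function; the inputs mirror the four questions above.

```python
# The simplified decision flow, condensed into one function.
def suggest_approach(stable_rules: bool, codifiable_expertise: bool,
                     judgment_essential: bool, internal_only: bool) -> str:
    if stable_rules:
        return "Clear: RPA / rule-based automation"
    if codifiable_expertise:
        return "Complicated: classical ML or fine-tuned LLMs + basic RAG"
    if judgment_essential:
        return "Complex: LLMs with human-in-the-loop + advanced RAG"
    if internal_only:
        return "Chaotic (internal): pattern-discovery prototypes"
    return "Chaotic (external): transparent beta with safety rails"

print(suggest_approach(False, True, False, False))
# -> Complicated: classical ML or fine-tuned LLMs + basic RAG
```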

Anti-Patterns to Avoid

Don't Use                  When                                          Use Instead
Agentic AI                 Process is Clear or well-mapped               RPA or rule-based automation
Heavy ML infrastructure    System is Chaotic                             Quick LLM prototypes
Rule-based systems         Process is Complex with context-dependency    LLM + human-in-the-loop
Fully autonomous AI        Human judgment is legally/ethically required  Augmentation, not automation
Enterprise LLM deployment  Still in Chaotic discovery                    Lightweight experimentation

Movement Between Domains

As you learn and stabilize processes, opportunities may shift domains:

  • Chaotic → Complex: After experimentation reveals patterns
  • Complex → Complicated: When you've documented decision rules
  • Complicated → Clear: After all exceptions are handled

Implication: Your technology choice should evolve as the domain shifts. What starts as agentic AI exploration may become fine-tuned ML, then eventually become simple RPA.

Technology Maturity Considerations

Technology         Maturity   Best For             Risk Level
RPA                Mature     Clear/Complicated    Low
Classical ML       Mature     Complicated          Medium
Fine-tuned LLMs    Maturing   Complicated          Medium
Basic RAG          Maturing   Complicated          Medium
LLMs (general)     Emerging   Complex/Chaotic      Medium-High
Advanced RAG       Emerging   Complex              Medium-High
Agentic AI         Early      Complex/Chaotic      High

Questions to Guide Technology Selection

  1. Stability: How often do the rules change? (Daily → Chaotic; Never → Clear)
  2. Expertise: Can the best person in your company explain every decision in the system? (Yes → Clear/Complicated; No → Complex; nobody can make these decisions systematically yet → Chaotic)
  3. Data: Is it structured and complete? (Yes → Clear/Complicated; No → Complex/Chaotic)
  4. Risk: What's the cost of a wrong decision? (High → Keep humans involved)
  5. Scale: How many decisions per day? (High volume + low complexity → Automate fully)
