AI Design-Reality Mismatch
Continuing my series on AI adoption. This post takes a deeper look at what I call Design-Reality Mismatch in my ADOPT framework: the gap between how AI is designed and how work actually happens. First: what happens when organisations deploy AI to solve problems no one actually has?
Behavioural research suggests this pattern is common. Jonathan Haidt’s Social Intuitionist Model argues that reasoning often follows rather than precedes judgment. We decide first, then construct justifications afterward. In organisations, this manifests as solution-driven thinking: someone becomes excited about AI, and the business case is reverse-engineered to fit.
I’ve seen this pattern. A leadership team is inspired by generative AI. Within weeks, a raft of pilots is announced. The problem statement arrives later: vague, retrofitted. “Improving efficiency” becomes a placeholder for genuine friction. The technology is real. The justification is post-hoc.
The UK Government’s AI Playbook puts it directly: “You should also be open to the conclusion that, sometimes, AI is not the best solution for your problem.” This sounds obvious. In practice, it’s hard, especially when senior stakeholders have already announced the initiative.
This is the first type of Design-Reality Mismatch: the AI’s design assumes a problem exists, but the problem was invented to justify the solution. The mismatch isn’t technical; it’s foundational. You can’t fix adoption of a tool that solves the wrong problem.
When AI is deployed without genuine justification, employees notice. They’re asked to adopt tools that don’t address actual pain points. The gap between what leadership claims and what workers experience becomes a trust problem, not a technology problem.
This compounds the barriers I’ve discussed throughout this series. Professionals navigating identity threat, transparency penalties, and competence erosion are now asked to use AI that doesn’t solve a problem they recognise.
What might this mean for organisations?
Start with the friction, not the technology. Map actual workflow bottlenecks experienced by the people who will use the tool. If you can’t articulate a pain point in their language, you may not have a problem worth solving.
Define falsifiable success criteria before deployment. What would count as failure? If the answer is vague, demand specificity: reduced processing time by X%, improved accuracy on Y decisions.
Distinguish pilots from strategy. Exploratory experiments have value. But label them as such. Employees know the difference between problem-solving and technology tourism.
Up next in the series: The Hidden Cost of Efficiency, or why solving the right problem the wrong way can still backfire.
-
UK Government AI Playbook