Measuring Barriers to AI Adoption
Performance Measurement in successful AI adoption
The previous articles in this series explored how AI deployment can fail through threats to identity, competence, workload balance, and organisational friction. This week: the P in my ADOPT framework - Performance Measurement.
Most AI initiatives are evaluated on ROI: revenue gains, cost savings, efficiency improvements. McKinsey's 2025 survey of C-suite executives found that 36% report no change in revenue from gen AI, and only 23% see any favourable change in costs. These metrics matter, but they're lagging indicators. They tell you what happened, not why.
The issue is that AI's benefits are sometimes not visible: small gains that never accumulate into a memorable sense of impact. AI may speed up a task or reduce friction, but if the benefit takes little conscious effort to access, users don't mentally register it. They undervalue AI's cumulative contribution, and so does the organisation.
Goal-Gradient Theory explains why this matters. Clark Hull's research showed that effort increases as people approach a visible goal. The inverse is also true: invisible goals produce invisible effort. If the metrics your organisation tracks are disconnected from what employees actually experience, you've severed the feedback loop that drives behaviour.
Many organisations try to surface these invisible gains through self-report surveys: "How much time did the tool save you?" This sounds reasonable, but research from the UK Cabinet Office found that people consistently overestimate time savings. Estimating requires anticipating how long a task would normally take, then subtracting how long it took with AI. Most people can't do this accurately. The result is measurement that feels rigorous but generates data employees don't trust.
So what does work?
Microsoft's analysis of 1,300 Copilot users offers a clue. Just 11 minutes of daily time savings is enough to act as a tipping point: the threshold at which users perceive the tool as valuable. After 11 weeks of consistent use, the majority reported that Copilot had fundamentally improved their productivity. This "11-by-11 rule" suggests small wins can compound into habit formation, but only if the measurement system makes them visible.
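What does that threshold add up to? A quick back-of-the-envelope calculation (the five-day working week is my assumption, not Microsoft's):

```python
# Back-of-the-envelope arithmetic behind the "11-by-11" figures,
# assuming a five-day working week.
minutes_per_day = 11
workdays_per_week = 5
weeks = 11

total_minutes = minutes_per_day * workdays_per_week * weeks
print(f"{total_minutes} minutes is roughly {total_minutes / 60:.0f} hours over 11 weeks")  # 605 minutes, ~10 hours
```

Roughly ten hours reclaimed over the habit-forming period, yet delivered in slices small enough to go unnoticed unless something makes them visible.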
What might this mean for organisations?
1. Measure behaviour, not just outcomes. Adoption rate. Frequency of use. Trust scores. These are leading indicators and they tell you whether the conditions for ROI are being met.
2. Make metrics visible and personal. Employees need to see their own progress, not aggregate statistics announced at town halls.
3. Triangulate your measurement. The UK Cabinet Office recommends mixed methods: self-report, usage analytics, and qualitative interviews. A rough sketch of the analytics side follows below.
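To make the analytics side concrete, here is a minimal sketch of two leading indicators (adoption rate and frequency of use) computed from a hypothetical usage log. The names and figures are invented for illustration; trust scores would still come from surveys.

```python
from collections import Counter
from datetime import date

# Hypothetical usage log of AI-tool sessions: (user, day). Real telemetry fields will differ.
usage_log = [
    ("ana", date(2025, 3, 3)), ("ana", date(2025, 3, 4)), ("ana", date(2025, 3, 5)),
    ("ben", date(2025, 3, 3)),
]
licensed_users = {"ana", "ben", "chi", "dee"}

active_users = {user for user, _ in usage_log}
adoption_rate = len(active_users) / len(licensed_users)  # share of licensed users who used the tool at all
active_days = Counter(user for user, _ in usage_log)     # frequency of use per person

print(f"Adoption rate: {adoption_rate:.0%}")  # 50%
print(dict(active_days))                      # {'ana': 3, 'ben': 1}
```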
All food for further thought.
This series concludes next week. What follows? A new way to diagnose where your AI adoption is really stuck, and how to solve it.
Successful AI deployment
The Psychology of AI
Deloitte’s Tech Trends 2026 report landed last month. They zero in on five forces reshaping the enterprise: AI going physical through robotics, the rise of agentic AI and a “silicon-based workforce,” the infrastructure reckoning as cloud-first strategies buckle under AI economics, the rebuilding of tech organisations around human-agent teams, and the cybersecurity paradox of AI as both threat and defence.
But one statistic jumped out: only 11% of organisations have successfully deployed AI agents in production.
Gartner predicts 40% of agentic AI projects will fail by 2027.
This is consistent with a pattern emerging across multiple other studies. McKinsey found 72% of organisations have deployed generative AI, but only 26% report measurable productivity gains and only 1% of executives describe their AI rollouts as “mature.” Google reports just 3% of organisations are “highly transformed.” Asana’s research shows 67% haven’t scaled AI beyond isolated experiments. MIT found that while 40% have piloted LLMs, only 5% have actually embedded them into workflows.
The usual suspects get blamed. Immature technology. Inadequate training. Lack of executive sponsorship. These factors matter.
But what if the real barriers are psychological?
What if we’re optimising for technical deployment while ignoring the behavioural substrate that determines whether anyone actually uses the thing?
As the UK Behavioural Insights Team puts it: “The promise of AI can only be fulfilled by understanding how and why people think and act the way they do.”
Deloitte’s report identifies the symptom of disappointing adoption: the gap between pilot and production. But it doesn’t provide the mechanism to diagnose why adoption stalls in a specific organisation. Is it motivation? People just don’t see genuine value. Is it capability? Confidence gaps hold back even the most willing adopters. Is it trust or identity threat? Professionals derive status from expertise. AI use can feel like an admission that twenty years of skill-building wasn’t quite enough.
The technology is here and organisations are rushing to deploy. What strikes me as missing is the human side, and decades of behavioural science research point us to solutions. The organisations that close the pilot-to-production gap won’t be those with the best technology. They’ll be those that diagnose the human barriers first. Behavioural science has a huge role to play here.
-
Deloitte Tech Trends 2026
Organisational friction and AI
Organisational resistance to AI
The previous articles in this series explored how AI deployment can fail through threats to identity, competence, and workload balance. This week: the O in my ADOPT framework, Organisational Friction. What happens when the design is sound, but the organisation still resists?
Two concepts from behavioral science explain this resistance: loss aversion and status quo bias. They're related but distinct, and addressing them requires different interventions.
Loss aversion, from Kahneman and Tversky's Prospect Theory, is the finding that losses loom larger than equivalent gains. Losing £100 feels roughly twice as painful as gaining £100 feels good. This asymmetry shapes how people evaluate any change, including whether to adopt AI.
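For readers who want the mechanics, this asymmetry is usually modelled with Tversky and Kahneman's value function, which is steeper for losses than for gains. A minimal sketch using their commonly cited 1992 parameter estimates (not universal constants; estimates vary across studies):

```python
def prospect_value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Tversky & Kahneman (1992) value function: gains are raised to the power alpha,
    losses are additionally weighted by lam, so losses loom larger than equivalent gains."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

print(prospect_value(100))   # ~ 57.5   subjective value of gaining £100
print(prospect_value(-100))  # ~ -129.5 subjective value of losing £100, about 2.25x larger in magnitude
```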
Status quo bias follows from this. Because change involves potential losses of routines, competencies, and familiar workflows, people tend to stick with what they have. Loss aversion makes change feel risky. Status quo bias is the resulting inertia.
The UK Behavioural Insights Team's research illustrates both. In one experiment, participants preferred human help over AI, even when AI was more accurate. But when the task was reframed around avoiding losses rather than achieving gains, this preference vanished entirely. That's loss aversion: same choice, different frame, different behaviour.
Status quo bias operates more passively. In another BIT study, only 40% of participants messaged an available chatbot, even when it could help them. They didn't weigh the options and reject it. They simply kept doing what they were already doing. No decision required.
This is why my ADOPT framework treats Organisational Friction as a distinct diagnostic category. Loss aversion and status quo bias aren't design problems or awareness gaps; they're environmental forces. If management signals, incentives, and psychological safety work against change, even willing employees will struggle.
What might this mean for organisations?
For loss aversion: reframe AI as protection against errors or competitive disadvantage, not just a tool for gains. "Avoid falling behind" may motivate more than "get ahead."
For status quo bias: make AI the path of least resistance. Embed it in existing workflows rather than bolting it on as an extra step.
For both: create psychological safety. When managers protect employees who experiment and fail, the calculus shifts. Trying something new feels less risky. The status quo loses its gravitational pull.
Later in this series, I'll introduce the ADOPT diagnostic survey for measuring exactly where these forces are strongest.
-
Kahneman and Tversky's Prospect Theory
Solving the right problem, in the wrong way…
Solving the right problems in the wrong way…
Continuing my series on AI deployment in the workplace. In my previous post I explored what happens when AI solves problems no one really has. This week: what happens when AI solves the right problems, but in the wrong way?
Research on cognitive offloading shows that humans naturally distribute mental effort across their environment. We use tools, notes, and routines to manage cognitive load. The "easy" tasks in a workday aren't just low-value activities to be eliminated. They provide rhythm, recovery, and a sense of accomplishment between harder work.
When AI automates these tasks, employees are left with more cognitively demanding work. The breaks disappear. The quick wins vanish. What remains are the complex items and decisions. Organisations celebrate the efficiency gains. But is there a longer-term cost?
Microsoft's 2025 Future of Work Report suggests the productivity promise of AI isn't materialising as expected. While 96% of C-suite leaders expect AI to boost productivity, 77% of employees say AI tools have actually added to their workload. 71% report greater feelings of burnout. The report identifies "workslop": AI-generated content that appears useful but lacks substance, forcing recipients to interpret, correct, or redo the work. This may explain why individual productivity gains aren't being fully seen at the organisational level.
This connects to decades of research on job design, too. Hackman and Oldham's Job Characteristics Model identifies skill variety and task identity as core drivers of work meaningfulness. When AI removes variety from the workday, leaving only complex, demanding tasks, it may inadvertently undermine the engagement that makes workers effective.
This is the second type of Design-Reality Mismatch in my ADOPT framework. AI deployment programs often see the automation of low-value tasks as purely beneficial. The reality is those tasks can sometimes serve a hidden psychological function. The mismatch is therefore about misunderstanding how work actually feels.
What this means for organisations:
1. Audit the full workday, not just individual tasks. Which tasks provide recovery? Which offer quick wins?
2. Preserve or replace the rhythm. If AI removes tasks that provided breaks, design alternatives.
3. Monitor wellbeing, not just productivity. If efficiency rises but wellbeing falls, the design needs revisiting.
Next up in the series: Why do people resist AI even when it's clearly better? The answer lies in what we fear losing, not what we stand to gain.
AI Design-Reality mismatch
What happens when organisations deploy AI to solve problems no one actually has?
Continuing my series on AI adoption. This post takes a deeper look at what I call Design-Reality Mismatch in my ADOPT framework: the gap between how AI is designed and how work actually happens. First: what happens when organisations deploy AI to solve problems no one actually has?
Behavioural research suggests this is common. Jonathan Haidt’s Social Intuitionist Model argues that reasoning often follows rather than precedes judgment. We decide first, then construct justifications afterward. In organisations, this manifests as solution-driven thinking: someone becomes excited about AI, and the business case is reverse-engineered to fit.
I’ve seen this pattern. A leadership team is inspired by generative AI. Within weeks, a raft of pilots are announced. The problem statement arrives later, vague, retrofitted. “Improving efficiency” becomes a placeholder for genuine friction. The technology is real. The justification is post-hoc.
The UK Government’s AI Playbook puts it directly: “You should also be open to the conclusion that, sometimes, AI is not the best solution for your problem.” This sounds obvious. In practice, it’s hard, especially when senior stakeholders have already announced the initiative.
This is the first type of Design-Reality Mismatch: the AI’s design assumes a problem exists, but the problem was invented to justify the solution. The mismatch isn’t technical, it’s foundational. You can’t fix adoption of a tool that solves the wrong problem.
When AI is deployed without genuine justification, employees notice. They’re asked to adopt tools that don’t address actual pain points. The gap between what leadership claims and what workers experience becomes a trust problem, not a technology problem.
This compounds the barriers I’ve discussed throughout this series. Professionals navigating identity threat, transparency penalties, and competence erosion are now asked to use AI that doesn’t solve a problem they recognise.
What might this mean for organisations?
Start with the friction, not the technology. Map actual workflow bottlenecks experienced by the people who will use the tool. If you can’t articulate a pain point in their language, you may not have a problem worth solving.
Define falsifiable success criteria before deployment. What would count as failure? If the answer is vague, demand specificity: reduced processing time by X%, improved accuracy on Y decisions. A minimal sketch of what this might look like follows after these points.
Distinguish pilots from strategy. Exploratory experiments have value. But label them as such. Employees know the difference between problem-solving and technology tourism.
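To make “falsifiable” concrete, here is a rough sketch of success criteria defined as checkable thresholds before deployment. The metric names and numbers are illustrative placeholders, not recommendations.

```python
# A minimal sketch of falsifiable success criteria, agreed before deployment.
# Metric names and thresholds are illustrative placeholders.
success_criteria = {
    "processing_time_reduction_pct": 20,  # "reduced processing time by X%"
    "decision_accuracy_gain_pct": 5,      # "improved accuracy on Y decisions"
}

def evaluate(measured: dict) -> dict:
    """Return pass/fail per criterion; falling short of a threshold counts as failure."""
    return {metric: measured.get(metric, 0.0) >= threshold
            for metric, threshold in success_criteria.items()}

print(evaluate({"processing_time_reduction_pct": 12, "decision_accuracy_gain_pct": 6}))
# {'processing_time_reduction_pct': False, 'decision_accuracy_gain_pct': True}
```

If nobody is willing to commit to numbers like these up front, that is itself a signal that the problem statement is still vague.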
Up next in the series: The Hidden Cost of Efficiency, or why solving the right problem the wrong way can still backfire.
-
UK Government AI Playbook
Trust…
Reflections on trust in the social sciences
The New Yorker recently ran an article on Oliver Sacks, who fabricated key details in his celebrated case studies. His books are incredibly powerful. He wrote with empathy and respect. But in his private journals he called his own findings “lies” and “falsification”.
Connect this to Diederik Stapel (50+ faked psychology papers), Dan Ariely and Francesca Gino (honesty researchers accused of data fraud), and a replication crisis showing only 1/3 of psychology studies hold up.
The common thread isn’t complicated: people basically lied.
Sacks knew he was giving patients “powers which they do not have.” Harvard found Gino “intentionally, knowingly, or recklessly committed research misconduct.” Stapel invented data for twelve years. Ariely denies responsibility for “directly” altering data.
These are moral failures. Full stop.
But there’s a second question: why did the lies work for so long?
The human sciences developed a seductive hybrid: research that was rigorous-seeming AND beautifully told. Sacks became “the scientist who wrote like a dream.” Behavioral scientists built empires and made huge amounts of money on TED Talk-ready findings.
The elegant prose, the compelling narratives, the camera-ready research: these weren’t neutral vehicles for fraud. They were what made it work. They created immunity. Who questions a story that beautiful? Who replicates research that already feels true?
Systems that should have caught liars instead rewarded them. Journals wanted novel findings, not boring replications. Universities wanted stars, not skeptics.
So we have two failures:
Moral: Individuals chose to deceive.
Structural: Institutions let deception flourish because good stories were more valued than verified ones.
Both matter. Blaming only structure lets liars off the hook. Blaming only individuals misses why this kept happening across fields and decades.
AI and professional identity
Continuing my series on AI adoption in the workplace. What happens to the feeling of professional mastery when the machine can do what you used to do?
Research published last year by Macnamara et al. argues that AI assistants may not only accelerate skill decay among experts, but may prevent those experts from recognising it. They call this the “illusion of competence”, a misleading sense of mastery fostered by AI’s fluency and convenience.
When AI routinely aids performance at a high level, even well-trained professionals may gradually lose the cognitive skills they once possessed. Worse, because AI-augmented performance looks indistinguishable from genuine expertise, the decay remains invisible until the system is unavailable or fails. High performance masks the limits on underlying capability.
Most AI discourse treats augmentation as obviously preferable, and for good reason. But the distinction is harder to maintain than it appears. Tasks that begin as augmentation can drift toward automation as users increasingly defer to AI recommendations. The human role becomes supervisory, then nominal.
I’ve seen this pattern in my own deployment experience. A tool introduced to “support” quietly becomes the default. Professionals who initially created and reviewed outputs carefully sometimes start to operate with less reflection. The “augmentation” frame obscures what is actually happening: a gradual transfer of cognitive effort from human to machine.
This is what I call a Design-Reality Mismatch, a core failure mode in my ADOPT diagnostic framework. The AI was designed as an augmentation tool, but the reality of how it is used erodes the very competence it is meant to support. Protected Competence, ensuring employees maintain their sense of professional mastery, requires intentional design.
What might this mean for organisations?
Don’t just frame AI as augmentation; design it that way. The label matters less than the architecture. Does the tool genuinely require human judgment to function? Or is the human role ceremonial?
Protect opportunities for independent practice and make clear when AI should not be used. The US Federal Aviation Administration recently recommended that pilots “periodically use their manual skills for the majority of flights” after evidence that automation support was eroding handling abilities. The same logic applies to knowledge work. Skills that aren’t exercised atrophy.
Make skill maintenance visible in performance metrics. If we only measure efficiency, we’ll optimise for it at the expense of capability. Organisations need to track whether professionals can still perform core tasks without AI assistance, and create protected time for them to keep practising those skills.
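As a rough illustration, that tracking could be as simple as periodic assessments of core tasks with the AI switched off, logged alongside assisted work. The record structure below is an assumption made for the sketch, not a prescription.

```python
# Illustrative sketch: periodic "AI-off" assessments of core tasks, logged alongside assisted work.
# The record fields are assumptions about what an organisation might capture.
assessments = [
    {"person": "ana", "task": "contract_review", "ai_assisted": False, "passed": True},
    {"person": "ben", "task": "contract_review", "ai_assisted": False, "passed": False},
    {"person": "ana", "task": "contract_review", "ai_assisted": True,  "passed": True},
]

unassisted = [a for a in assessments if not a["ai_assisted"]]
unassisted_pass_rate = sum(a["passed"] for a in unassisted) / len(unassisted)
print(f"Unassisted pass rate: {unassisted_pass_rate:.0%}")  # a capability signal worth tracking over time
```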
-
Does using artificial intelligence assistance accelerate skill decay and hinder skill development without performers’ awareness?
The role of control and agency in AI
Continuing my short series on AI and behavioral change. I've looked at identity threat (how AI adoption can feel like an admission that hard-won expertise wasn't quite enough) and the transparency trap (where disclosure of AI use erodes the very trust it's meant to build).
This time: the question of control.
Research published in Management Science by Dietvorst, Simmons, and Massey shows that across multiple incentivized experiments, participants were considerably more likely to use an imperfect algorithm when they could modify its outputs, even when those modifications were restricted. In one study, allowing people to adjust forecasts by as little as 2 points (on a 100-point scale) significantly increased algorithm adoption. The preference for modifiable algorithms, the researchers found, was simply about wanting some form of control. What mattered was the psychological sense of having agency over the process.
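As a minimal sketch of the mechanism they tested, the idea is to let users nudge the algorithm's output, but only within a small band. The function name and the 2-point default are my illustration, not the authors' code.

```python
def constrained_forecast(model_forecast: float, human_input: float, max_adjust: float = 2.0) -> float:
    """Let the user adjust the algorithm's forecast, but only within +/- max_adjust points.

    Illustrative only: the actual experiments restricted adjustments in study-specific ways.
    """
    lower = model_forecast - max_adjust
    upper = model_forecast + max_adjust
    return min(max(human_input, lower), upper)

# A user who wants to override a forecast of 62 with 80 ends up at 64:
print(constrained_forecast(62, 80))  # 64.0
```

The design point is that even a tightly bounded override preserves the sense of agency that drives adoption.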
This connects to Self-Determination Theory, the influential framework developed by Deci and Ryan. SDT proposes that humans have three core psychological needs: autonomy, competence, and relatedness. When these needs are frustrated, motivation suffers. When they're supported, people engage more deeply and perform better. A major review in Nature applies the SDT lens to the future of work. The authors note that algorithmic management systems often decrease satisfaction of autonomy needs, precisely because they reduce employees' belief in being "agents of their own behaviour" rather than "pawns of external pressures." The motivational consequences are predictable: disengagement, resistance, quiet sabotage.
I've seen this pattern in my own organisational AI deployment. When tools are mandated without input, adoption is grudging at best. When employees have genuine choice over which tasks the AI assists with, how outputs are used, and whether to follow recommendations, the same technology feels like augmentation rather than imposition.
What might this mean for organisations?
Design for controllability, not just capability. The technical question is "does this AI work?" The behavioral question is "does this AI preserve the user's sense of agency?" Both matter for sustained adoption.
Involve employees in implementation decisions. Which workflows benefit most from AI? Where should human judgment remain primary? These aren't technical questions, they're questions about professional identity and job design.
Forcing AI use may generate short-term compliance metrics while eroding the psychological conditions for long-term engagement. Voluntary adoption, supported by training and clear benefits, tends to stick.
AI is often sold as a tool to enhance human capability. But when deployed in ways that strip agency, it undermines the very motivation that makes capability matter.