Measuring Barriers to AI Adoption


The previous articles in this series explored how AI deployment can fail through threats to identity, competence, workload balance, and organisational friction. This week: the P in my ADOPT framework - Performance Measurement.


Most AI initiatives are evaluated on ROI: revenue gains, cost savings, efficiency improvements. McKinsey's 2025 survey of C-suite executives found that 36% report no change in revenue from gen AI, and only 23% see any favourable change in costs. These metrics matter, but they're lagging indicators. They tell you what happened, not why.


The issue is that AI's benefits are often simply not visible: small gains that don't accumulate into a memorable sense of impact. AI may speed up a task or reduce friction, but if the benefit requires little conscious thought to access, users don't mentally register it. They undervalue AI's cumulative contribution, and so does the organisation.


Goal-Gradient Theory explains why this matters. Clark Hull's research showed that effort increases as people approach a visible goal. The inverse is also true: invisible goals produce invisible effort. If the metrics your organisation tracks are disconnected from what employees actually experience, you've severed the feedback loop that drives behaviour.


Many organisations try to surface these invisible gains through self-report surveys: "How much time did the tool save you?" This sounds reasonable, but research from the UK Cabinet Office found that people consistently overestimate time savings. Estimating requires anticipating how long a task would normally have taken, then subtracting how long it took with AI. Most people can't do this accurately. The result is measurement that feels rigorous but generates data employees don't trust.


So what does work?

Microsoft's analysis of 1,300 Copilot users offers a clue. Just 11 minutes of daily time savings is enough to act as a tipping point: the threshold at which users perceive the tool as valuable. After 11 weeks of consistent use, the majority reported that Copilot had fundamentally improved their productivity. This "11-by-11 rule" suggests small wins can compound into habit formation (11 minutes a day adds up to roughly 10 hours across 11 working weeks), but only if the measurement system makes them visible.


What might this mean for organisations?


1. Measure behaviour, not just outcomes. Adoption rate. Frequency of use. Trust scores. These are leading indicators: they tell you whether the conditions for ROI are being met.


2. Make metrics visible and personal. Employees need to see their own progress, not aggregate statistics announced at town halls.


3. Triangulate your measurement. The UK Cabinet Office recommends mixed methods: self-report surveys, usage analytics, and qualitative interviews.


All food for further thought.


This series concludes next week. What follows? A new way to diagnose where your AI adoption is really stuck, and how to solve it.
