ADOPT: an approach to thinking about AI use in the workplace…
I've written before about the psychology of AI adoption. About psychological safety, Self-Determination Theory, the watch-outs from my own enterprise rollouts. But those were scattered observations. In a series of upcoming articles I'll pull some of this together.
The MIT Media Lab recently reported that 95% of organisations achieve zero return from their AI initiatives. McKinsey estimate that only 1% of executives describe their rollouts as "mature." BCG continue to report dismal success rates for large-scale change programmes. The usual suspects get blamed. Poor data quality. Immature technology. Inadequate training. Lack of executive sponsorship.
But.
What if the real barriers are psychological? What if we're optimising for technical deployment while ignoring the behavioural layer that determines whether anyone actually uses the thing?
This is what I'll be exploring over the coming weeks. Not the technical failures. The human ones. The reasons why perfectly good AI tools sit unused, why adoption stalls, why "change management" so often falls flat.
The research points to a set of recurring barriers:
Identity threat. Professionals derive status and self-worth from expertise. AI can feel like an admission that twenty years of skill-building wasn't quite enough.
The transparency dilemma. Disclosing AI use can undermine perceptions of competence. There's a credibility tax on honesty.
Loss aversion. Potential losses loom larger than equivalent gains. Change feels like giving something up.
Workflow friction. Small barriers compound. If AI isn't on the path of least resistance, it won't get used.
Measurement blindness. We track ROI and efficiency but ignore adoption rates, trust calibration, and identity conflict.
Meaningless oversight. Human-in-the-loop becomes theatre when people lack the time, expertise, or authority to genuinely intervene.
Each of these has robust research behind it. Each manifests differently across organisations. And each requires a different intervention.
The destination for this series is ADOPT, a behavioural framework I'm developing for diagnosing and future-proofing AI deployments. Each component of the framework captures a cluster of psychological barriers that cause technically sound AI to fail behaviourally. The ADOPT framework and diagnostic tool are intended to help identify which barriers are blocking adoption in a specific context, and what to do about them.
I'll draw on Social Identity Theory, Cognitive Load Theory, Self-Determination Theory, and the emerging literature on professional identity threat, amongst others. I'll also connect this to my own experience of building and deploying a global AI solution. And I'll try to be honest about what we don't yet know. If you've been following some of my posts on AI, this series is where the threads converge. Watch this space.