The role of control and agency in AI
Continuing my short series on AI and behavioral change. I've looked at identity threat (how AI adoption can feel like an admission that hard-won expertise wasn't quite enough) and the transparency trap (where disclosure of AI use erodes the very trust it's meant to build).
This time: the question of control.
Research published in Management Science by Dietvorst, Simmons, and Massey shows that across multiple incentivized experiments, participants were considerably more likely to use an imperfect algorithm when they could modify its outputs, even when those modifications were tightly restricted. In one study, allowing people to adjust forecasts by as little as 2 points (on a 100-point scale) significantly increased algorithm adoption. The preference for modifiable algorithms, the researchers found, wasn't about how much adjustment was allowed; it was about having some form of control. What mattered was the psychological sense of agency over the process.
This connects to Self-Determination Theory, the influential framework developed by Deci and Ryan. SDT proposes that humans have three core psychological needs: autonomy, competence, and relatedness. When these needs are frustrated, motivation suffers. When they're supported, people engage more deeply and perform better. A major review in Nature Reviews Psychology applies the SDT lens to the future of work. The authors note that algorithmic management systems often decrease satisfaction of autonomy needs, precisely because they reduce employees' belief that they are "agents of their own behaviour" rather than "pawns of external pressures." The motivational consequences are predictable: disengagement, resistance, quiet sabotage.
I've seen this pattern in my own work on organisational AI deployment. When tools are mandated without input, adoption is grudging at best. When employees have genuine choice (which tasks the AI assists with, how outputs are used, whether to follow a recommendation), the same technology feels like augmentation rather than imposition.
What might this mean for organisations?
Design for controllability, not just capability. The technical question is "does this AI work?" The behavioral question is "does this AI preserve the user's sense of agency?" Both matter for sustained adoption.
Involve employees in implementation decisions. Which workflows benefit most from AI? Where should human judgment remain primary? These aren't technical questions; they're questions about professional identity and job design.
Prefer voluntary adoption over mandates. Forcing AI use may generate short-term compliance metrics while eroding the psychological conditions for long-term engagement. Voluntary adoption, supported by training and clear benefits, tends to stick.
AI is often sold as a tool to enhance human capability. But when deployed in ways that strip agency, it undermines the very motivation that makes capability matter.