AI and professional identity


Continuing my series on AI adoption in the workplace. What happens to the feeling of professional mastery when the machine can do what you used to do?

Research published last year by Macnamara et al. argues that AI assistants may not only accelerate skill decay among experts, but may prevent those experts from recognising it. They call this the “illusion of competence”, a misleading sense of mastery fostered by AI’s fluency and convenience.

When AI routinely aids performance at a high level, even well-trained professionals may gradually lose the cognitive skills they once possessed. Worse, because AI-augmented performance looks indistinguishable from genuine expertise, the decay remains invisible until the system is unavailable or fails. High performance masks the limits on underlying capability.

Most AI discourse treats augmentation as obviously preferable, and for good reason. But the distinction is harder to maintain than it appears. Tasks that begin as augmentation can drift toward automation as users increasingly defer to AI recommendations. The human role becomes supervisory, then nominal.

I’ve seen this pattern in my own deployment experience. A tool introduced to “support” quietly becomes the default. Professionals who initially created and reviewed outputs carefully start to operate with less reflection. The “augmentation” frame obscures what is actually happening: a gradual transfer of cognitive effort from human to machine.

This is what I call a Design-Reality Mismatch, a core failure mode in my ADOPT diagnostic framework. The AI was designed as an augmentation tool, but the reality of how it is used erodes the very competence it is meant to support. Protected Competence, ensuring that employees maintain their sense of professional mastery, requires intentional design.

What might this mean for organisations?

Don’t just frame AI as augmentation; design it that way. The label matters less than the architecture. Does the tool genuinely require human judgment to function? Or is the human role ceremonial?

Protect opportunities for independent practice and make clear when AI should not be used. The US Federal Aviation Administration recently recommended that pilots “periodically use their manual skills for the majority of flights” after evidence that automation support was eroding handling abilities. The same logic applies to knowledge work. Skills that aren’t exercised atrophy.

Make skill maintenance visible in performance metrics. If we only measure efficiency, we’ll optimise for it at the expense of capability. Organisations need to track whether professionals can still perform core tasks without AI assistance, and create protected time for them to continue to learn to do so.

Reference: Macnamara et al., “Does using artificial intelligence assistance accelerate skill decay and hinder skill development without performers’ awareness?”
