Social Identity and AI


Continuing my short series of articles on AI and change management. Researchers in Germany found earlier this year that when AI systems are deeply integrated into clinical workflows, physicians feel their professional identity is threatened, even when they trust the technology itself.

We tend to assume resistance to AI is about the technology. The interface is clunky. The outputs are unreliable. The training was inadequate.

But what if the barrier is also about the identity of the people involved?

Social Identity Theory proposes that our self-concept is partly derived from the groups we belong to, and from the value we attach to that membership. Being a "senior analyst" or "experienced clinician" isn't just a job title. It's a source of pride, status, and self-worth.

Now imagine asking that person to use a tool that appears to do much of what they do, at largely comparable quality and in less time. Even if it is framed as "augmentation," the psychological reality may feel closer to replacement.

The ego threat literature explains what happens next. When high-status professionals perceive a threat to their self-image, they don't simply disengage; they actively self-handicap. Paradoxically, the most confident experts respond with behaviours that undermine their own performance.

I've witnessed this first-hand in my own enterprise rollout. The most resistant users aren't the ones who struggle with the technology; they are often the most skilled, the ones who have built their reputations on the very capabilities the AI now performs. Psychologist Adam Grant argues that changing your mind requires grappling with your identity. No one enjoys being wrong, but we can learn to enjoy having been wrong, because it means we're now less wrong than before.

What might this mean in practice? Research on psychological spirals shows that initial encounters shape all subsequent interpretations, so a poorly framed AI rollout can create a self-reinforcing pattern that compounds over time. A few things help:

  • Before the rollout begins, help professionals see themselves as more than the specific capability the AI now performs.
  • Frame AI adoption as a signal of continued personal growth, and give people the agency to define what their expertise means in an AI-augmented context.
  • Use local champions: respected colleagues who model AI integration. We've found these to be effective. When a senior team member enthusiastically adopts AI, it signals that adoption is compatible with expert identity; peer modelling addresses the identity threat directly.
  • Remember that framing alone isn't enough. If "freeing the experts for higher-value work" is the message, that work must actually exist and be genuinely valued.

So think carefully about how we help professionals expand their sense of self, so that adopting better tools feels like growth rather than defeat.

  • Perceived Trust and Professional Identity Threat in AI-Based Clinical Decision Support Systems: Scenario-Based Experimental Study on AI Process Design Features
