The transparency dilemma: identity erosion in AI


Continuing my short series on AI and change management. My first article introduced ADOPT, a behavioural framework for diagnosing why technically sound AI fails to get used, and how we can address that (more on this to come). We then looked at identity threat: how AI adoption can feel like an admission that hard-won expertise wasn't quite enough, and why your most skilled people may be the most resistant.

This week: what happens when professionals try to be honest about using AI? Research published earlier this year found that across thirteen experiments involving over 5,000 participants, people who disclose their AI usage are consistently trusted less than those who don't.

The researchers tested this across diverse contexts: job applicants who mentioned using AI on their CVs, professors who disclosed AI-assisted grading, managers who admitted AI helped write performance reviews. The pattern held everywhere: disclosure eroded trust. Using AI, it seems, still reads to many as a kind of rule-breaking, or at least as cognitive offloading.

The researchers also found that being exposed by a third party (having your AI use revealed by someone else) is even more damaging than self-disclosure. So you're penalised for honesty, but penalised more for secrecy that gets uncovered. Framing the disclosure ("I just used it for proofreading"), emphasising human oversight ("I reviewed everything carefully"), or positioning it as best practice ("I'm disclosing in the spirit of transparency") did not prevent the trust penalty.

This goes beyond what researchers call "algorithm aversion", the tendency to distrust algorithmic judgment even when it outperforms humans (see link in comments).

We don't just distrust the machine. We distrust people who admit to using it.

I've seen this play out in my own AI deployment work. People encouraged to be open about AI use find themselves hesitating, not because they're hiding anything, but because disclosure feels like confession.

This connects directly to identity threat. Professionals already asking whether using AI diminishes their expertise now face a second question: will others see me as less credible for admitting it?

What might this mean?

Mandating transparency without shifting norms around AI use creates pressure that falls hardest on early adopters and honest practitioners. Leadership modelling matters. If senior people openly acknowledge appropriate AI use, and are celebrated rather than diminished for it, the legitimacy calculus starts to shift. Norms change when high-status actors change them first.

Most importantly, we need to reframe what AI use signals. The target interpretation should be professional augmentation: "you're working smarter." This happens through stories, visible examples, and repeated demonstration that AI-assisted work can be of excellent quality.

Until organisational culture catches up, those who openly disclose AI use appear to be paying a price for their honesty.
