AI Transformation and the McNamara Fallacy

I'm partway through re-watching the compelling PBS documentary The Vietnam War. Parts of the series are relentless, almost punishing, and still somehow more relevant than ever. July to December 1967 was the period when American troop levels peaked at half a million and confidence in victory reached its zenith. Secretary of Defense Robert McNamara presented charts showing progress on every metric. The "net body count" demonstrated clear success. Every computer printout pointed in the same direction. Within eighteen months, the Tet Offensive would shatter public confidence, McNamara would resign, and the most data-driven military campaign in history would stumble toward strategic defeat.

One line from the documentary has stayed with me. A Vietnam veteran, reflecting on the body count strategy, observed: "If you can't count what's important, you make what you can count important."

That line could serve as the epitaph for McNamara's statistical approach to war. It could also serve as a warning for how governments are now approaching AI transformation.

The Fallacy in Four Steps

Daniel Yankelovich (the social scientist whose research on American public opinion spanned five decades) later codified what went wrong into a four-step progression. First, measure whatever can be easily measured. Second, disregard that which cannot easily be measured or given quantitative value. Third, presume that what cannot be measured easily is not important. Fourth, say that what cannot be easily measured does not exist.

"This is suicide," Yankelovich concluded.

He called it the McNamara Fallacy.

McNamara's approach was not stupid. It was, in many ways, ahead of its time. Before joining the Kennedy administration, he had been one of the "Whiz Kids" who revolutionised Ford Motor Company through statistical process control. At Ford, measuring defect rates and optimising assembly lines had produced spectacular results. The logic seemed transferable: if you can measure success, you can manage it.

The problem was that war is not an assembly line. The metrics McNamara chose (body counts, kill ratios, sorties flown, bombs dropped) were all precisely measurable. They were also disconnected from strategic objectives. When CIA analyst Desmond FitzGerald told McNamara the statistics were "meaningless, it just didn't smell right," McNamara stopped inviting him to briefings. When Brigadier General Edward Lansdale suggested adding an "x-factor" to represent the feelings of ordinary Vietnamese people, McNamara wrote it down, asked what it was, then erased it. He could not measure it.

David Halberstam (the Pulitzer Prize-winning journalist whose book The Best and the Brightest documented the war's architects) described watching a colonel brief McNamara at Danang in 1965. The colonel began with terrain, troop positions, strategic context. McNamara kept interrupting, asking for numbers. The colonel pivoted entirely to statistics: percentages, ratios, projections. It was, Halberstam wrote, "so blatant a performance that it was like a satire." McNamara was looking for the war to fit his criteria, his definitions.

By 1977, when researchers surveyed American generals who had served in Vietnam, only 2% considered body count a valid measure of progress. Their assessments: "A fake, totally worthless." "Often blatant lies." "Grossly exaggerated by many units primarily because of the incredible interest shown by people like McNamara."

The metrics had become the mission. The mission had been lost.

The New Metrics

In January 2025, the UK government published its AI Opportunities Action Plan. The document projected £47 billion in economic benefits. It announced £3.25 billion for a Transformation Fund, £1.75 billion for computing infrastructure, 213 applications for AI Growth Zones, 5,000 jobs in South Wales alone. Technology Secretary Peter Kyle's framing was explicit: Britain would "move fast, fix things" and proceed "at whiplash speed" to "shape the AI revolution rather than wait to see how it shapes us."

A year later, in January 2026, US Secretary of Defense Pete Hegseth announced a new AI Acceleration Strategy at SpaceX's Starbase facility in Texas. The strategy established a "Barrier Removal SWAT Team" with authority to waive non-statutory requirements, mandated that all service secretaries submit data asset catalogs within 30 days, and declared that "data hoarding is now a national security risk." Hegseth's framing was equally explicit: "Speed wins; speed dominates." The goal was an "AI-first warfighting force" operating on a "wartime delivery model."

Two major governments, a year apart, independently arriving at strikingly similar approaches. Both strategies share a common feature. They measure what can be easily measured: pounds invested, compute capacity, jobs created, barriers removed, zones approved, pilots launched. They will probably provide dashboards of quantifiable progress and present charts that point in encouraging directions.

They are McNamara's body counts in new form.

What the Research Actually Shows

The evidence on digital and AI transformation is beginning to come together. Seventy percent of digital transformation initiatives fail to meet their objectives (Gartner, 2025). Ninety-five percent of enterprise AI pilots fail to pay off (MIT). The abandonment rate for AI initiatives spiked 147% in 2025, with 42% of companies scrapping most projects, up from 17% the year before. Only 5% of enterprise-grade AI systems reach production.

Why such dismal returns?

The research points consistently in one direction: people, not technology. A 2025 study in the International Journal of Human-Computer Interaction found that "leading AI adoption is not a simple engineering exercise but rather represents a behavioral exercise where change management principles" are essential. Resistance to change emerged as the top barrier to scaling generative AI in MIT's NANDA report. McKinsey tested 25 attributes and found that workflow redesign (fundamentally changing how work gets done to integrate AI) was the single biggest predictor of success. Only 21% of organisations do it.

The Boston Consulting Group's research produced perhaps the most telling finding: what they call the 10-20-70 Rule. Seventy percent of AI value comes from changing how people work, not from algorithms. Twenty percent comes from data and technology infrastructure. Only 10% comes from the algorithms themselves. Yet organisations spend the vast majority of resources optimising that 10%.

This is Yankelovich's four steps playing out in real time. Efficiency metrics (daily users, hours logged, number of agents deployed) are easy to measure, so they receive attention. Cultural change and behavioural adoption resist easy measurement, so they are disregarded and then presumed unimportant. And strategies proceed as if the harder-to-measure factors do not exist.

The UK government's AI Playbook, published in February 2025, acknowledges the people problem. It includes 10 principles and training courses on Civil Service Learning. But the investment ratios tell a different story: £3.25 billion for the Transformation Fund, £1.75 billion for computing infrastructure, and no comparable figure for behaviour change management or adoption support. The metrics that will define "success" are infrastructure metrics. The measures that would reveal whether actual value is created are not being tracked.

The Leadership Paradox

McKinsey's 2025 research revealed a striking asymmetry in how organisations understand their own AI challenges. Employees are three times more likely to use AI than leaders expect. But leaders are 2.4 times more likely to blame employees for adoption failures than to acknowledge leadership shortfalls.

This pattern illuminates how the McNamara Fallacy operates psychologically. Leaders see the inputs they control: investment approvals, technology procurement, infrastructure buildout. These feel like action. These produce metrics. When adoption fails, the natural assumption is that the recipients of this largesse have failed to appreciate it, not that the strategy itself measured the wrong things.

The parallel to Vietnam is uncomfortable. McNamara saw the inputs he controlled: troops deployed, sorties authorised, munitions expended. These felt like action. These produced metrics. When the war went badly, the assumption was that the execution had failed, not that the metrics themselves were meaningless. More troops. More bombs. More body counts. The logic was self-reinforcing and self-blinding.

A Problem of Psychology, Not Technology

The problem is fundamentally psychological, not technical. If people are not adopting AI, the obstacle is rarely that they lack access or training. It is that something about adoption feels threatening, uncomfortable, or wrong. Until that something is understood and addressed, no amount of infrastructure investment will change behaviour.

Listen differently. When McNamara dismissed Lansdale's "x-factor" for the feelings of ordinary Vietnamese people, he was not merely ignoring soft data. He was refusing to engage with the dimension of the problem that would determine success or failure. The equivalent today is listening for what people do not say about AI adoption: the unspoken concerns about competence, status, job security, and professional identity that shape behaviour far more than any formal policy.

Match interventions to actual barriers. If the barrier is motivation (people don't see why AI matters), then communication and leadership messaging may help. If the barrier is capability (people don't feel able to use AI effectively), then training and support may help. But if the barrier is psychological cost (people feel that AI threatens their professional identity, their autonomy, or their relationships with colleagues), then training is not merely ineffective. It is beside the point.

The £3.25 billion Transformation Fund and the Barrier Removal SWAT Team assume that the barriers are infrastructure and regulation. The research suggests the barriers are more often internal: the sense of loss people feel when asked to adopt a technology that may make them feel less expert, less autonomous, less essential.

Addressing these barriers requires different interventions: reframing AI as augmentation rather than replacement, preserving meaningful human judgment in workflows, creating psychological safety to experiment without career risk, and perhaps most importantly, involving people in designing how AI integrates with their work rather than imposing integration upon them.

Accept that some things cannot be counted. This is the hardest part. McNamara's genius was quantification; it was also his blindness. The temptation to respond to the McNamara Fallacy with better metrics is almost irresistible. If the current metrics are wrong, surely the answer is right metrics.

But some of what matters most in AI adoption cannot be reduced to numbers. The threat a senior professional feels when AI performs a task that took her twenty years to master. The unspoken worry that transparency about AI use will undermine credibility with clients. The feeling that something essential is being lost even as something useful is gained.

These experiences are real. They shape behaviour. They determine whether the £47 billion projection becomes reality or remains a number on a slide. And they cannot be captured by any metric a government might reasonably collect.

What they require is not measurement but recognition. Strategies that acknowledge the psychological dimension of adoption, even when they cannot quantify it. Leaders who understand that resistance often reflects legitimate concerns rather than mere reluctance. Programmes designed with awareness that asking people to adopt AI is asking them to change not just how they work, but how they think about themselves as professionals.

The Self-Licking Ice Cream Cone

In 2019, the Washington Post published an investigation into the Afghanistan war based on interviews with officials involved in the conflict. Colonel Bob Crowley, who served as a senior counterinsurgency advisor, described the reporting culture: "Every data point was altered to present the best picture possible. Surveys, for instance, were totally unreliable but reinforced that everything we were doing was right and we became a self-licking ice cream cone."

The phrase captures the endpoint of the McNamara Fallacy. Metrics designed to track progress become metrics designed to demonstrate progress. The distinction is subtle but fatal. When the purpose of measurement shifts from understanding reality to justifying investment, the measurement system becomes self-referential. It tells you what you want to hear. It stops telling you what you need to know.

The UK and US AI strategies are not yet self-licking ice cream cones. But the dynamics that produce them are visible. The £47 billion projection. The 5,000 jobs. The 213 zone applications. The "whiplash speed." These are metrics that will be cited to demonstrate success. They are also metrics that will demonstrate success regardless of whether actual value is created, because actual value creation is not what they measure.

What McNamara Taught

Yankelovich's four steps end with presuming that what cannot be measured does not exist. The antidote is not to measure more, but to act as if the unmeasured exists, because it does.

Robert McNamara lived until 2009. In later years, he reflected on Vietnam with remarkable candour. In the documentary The Fog of War, he listed eleven lessons he had learned. Lesson five applies here: "Proportionality should be a guideline in war." The body count strategy was disproportionate in a specific sense. It measured one thing (enemy deaths) with great precision while ignoring everything else that determined strategic success. It was not that measurement was wrong. It was that measurement of one variable, to the exclusion of others, created the illusion of control while destroying the reality.

The same lesson applies to AI transformation. Measuring infrastructure investment is not wrong. Measuring compute capacity is not wrong. Measuring jobs created is not wrong. What is wrong is measuring these things while ignoring adoption rates, cultural readiness, and organisational capability, and then treating the metrics you have as if they are the metrics that matter.

"If you can't count what's important," the veteran in Burns' documentary observed, "you make what you can count important."

This is the temptation. It is also the trap.

Governments can build AI Growth Zones. They can fund exascale supercomputers. They can establish barrier removal SWAT teams and mandate data catalogs. All of these will produce metrics. All of these will produce charts that point in encouraging directions. None of these will address why 70-95% of transformations fail, because the reasons for failure lie in factors that the strategies do not measure and therefore, by Yankelovich's logic, presume do not exist.

The Vietnamese farmers whose determination McNamara erased from his notepad did not stop existing because he could not quantify them. The cultural resistance and adoption barriers that determine AI success will not stop mattering because organisations prefer to count other things.

