Thinking Clearly in the Age of the Machine
A 2023 study conducted by Harvard Business School, Boston Consulting Group (1), and researchers from Wharton and MIT delivered a finding that deserves more attention than it received. When 758 BCG consultants were given access to GPT-4 for a set of realistic consulting tasks, those using AI completed 12.2 percent more tasks on average, finished them 25 percent faster, and produced work rated 40 percent higher in quality. These are striking numbers. But the study's more interesting finding concerned what happened when AI was used for the wrong tasks: those falling outside what the researchers called the "jagged technological frontier." In those cases, consultants using AI were 19 percentage points less likely to produce correct solutions than those working without it.
The difference, it turned out, was not primarily about the technology. It was about how people thought.
I wrote recently about my eldest daughter's decision to study History at university, and about why liberal arts education, despite falling enrolments and departmental closures, may be precisely the right preparation for an AI-driven future. Isaiah Berlin's distinction between hedgehogs and foxes offered a useful lens: foxes, who know many things and draw on varied perspectives, tend to outperform hedgehogs in uncertain environments. Philip Tetlock's research on expert prediction confirms this empirically. John Kay puts it directly: foxes are more likely to be right, even if hedgehogs are more likely to be listened to.
But that argument was incomplete. The fox-like capacity for synthesis and judgment (the ability to evaluate conflicting sources, identify patterns across contexts, and ask the right questions) is necessary but not sufficient. Something else is required to translate these capabilities into effective AI collaboration. That something, I now think, is the discipline of clear thinking made visible through clear writing.
George Orwell's 1946 essay "Politics and the English Language" (2) is often cited for its six rules of good writing: never use a dying metaphor, never use a long word where a short one will do, cut words ruthlessly, prefer the active voice, avoid jargon, and break any rule rather than say something barbarous. These are useful prescriptions. But the essay's deeper argument is about the relationship between language and thought.
Orwell's central claim is that "the slovenliness of our language makes it easier for us to have foolish thoughts." The relationship is bidirectional: unclear writing both reflects and reinforces unclear thinking. Ready-made phrases, Orwell observed, "will construct your sentences for you, even think your thoughts for you, to a certain extent, and at need they will perform the important service of partially concealing your meaning even from yourself."
This is not merely a stylistic concern. Orwell was writing about political manipulation, about how vague language enables people to defend the indefensible. But his insight applies with unexpected force to human-AI collaboration. When we communicate with large language models, we are, in Ethan Mollick's phrase, "programming with words." The clarity of our input directly shapes the quality of the output.
The old computing aphorism "garbage in, garbage out" takes on new meaning in the age of generative AI. But the garbage is not data; it is thought.
The Harvard-BCG study introduced the concept of a "jagged technological frontier"—an irregular boundary separating tasks where AI excels from those where it struggles or actively misleads. The frontier is not intuitive. Karim Lakhani, one of the study's authors, noted that many participants "use it as an information search tool like Google. This is not Google."
The researchers observed two patterns among consultants who navigated this frontier successfully. Some acted as "Centaurs," dividing labour strategically between themselves and the AI, knowing when to delegate and when to retain control. Others became "Cyborgs," integrating their workflow continuously with the AI, maintaining constant interaction at a granular level. Both approaches required something that the less successful participants lacked: a clear understanding of what they were trying to accomplish and why.
This is where clear thinking becomes essential. The consultants who stumbled were not lacking in intelligence or domain expertise. They were lacking in the metacognitive awareness to recognise when AI's confident-sounding output was leading them astray. They trusted the machine precisely where scepticism was warranted, and remained sceptical where trust was appropriate.
Mollick, who contributed to the study, has argued consistently that effective AI use is "not about crafting the perfect prompt, but rather using it interactively." The key, he suggests, is experimentation, iteration, and ongoing conversation. But this interactive approach presupposes a user who knows what they want, can articulate it clearly, and can evaluate whether the AI's response advances their purpose. These are not technical skills. They are thinking skills—the capacity to break a problem into components, to specify criteria for success, to recognise relevant distinctions.
Orwell proposed that a scrupulous writer asks four questions of every sentence: What am I trying to say? What words will express it? What image or idiom will make it clearer? Is this image fresh enough to have an effect? He added two more: Could I put it more shortly? Have I said anything that is avoidably ugly?
These questions are useful for prompting AI precisely because they are useful for thinking.
The same insight appears in different forms across the literature on clear communication. William Strunk, in The Elements of Style (3), offered an injunction that is almost algorithmically precise: "A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts." Clarity, in Strunk's formulation, is a design discipline, the elimination of everything that does not serve the purpose.
Barbara Minto's The Pyramid Principle (4) approaches the problem from a structural rather than stylistic angle. Her insight was that clear communication requires clear thinking about the logic of what you are presenting: start with the answer, group and summarise supporting ideas, order them logically. Minto developed this framework at McKinsey in the 1960s, specifically to help consultants structure their arguments before writing.
The irony is difficult to miss. The consulting industry that codified structured thinking is now discovering, via the Harvard-BCG study, that this discipline is exactly what is needed to work effectively with the technology disrupting it. The consultants who struggled with AI had not done the prior work that Minto's method demands: they had not built the pyramid before they started communicating.
There is something pointed about how people typically use AI badly. They approach it the way Minto warned against approaching any communication: bottom-up, associative, hoping the structure will emerge. They throw information at the model and expect it to divine their purpose. But large language models are, in a sense, the ultimate audience for Minto's method—they respond literally to what you give them, with no ability to intuit what you actually meant.
Her formulation translates directly to effective prompting: What outcome do I want? What are the key components of the problem? What is the logical relationship between them? The person who has answered these questions before engaging the AI is positioned to use it well. The person who has not is likely to produce plausible-sounding mediocrity.
Consider the difference between a vague request ("write me something about leadership") and one that reflects structured thinking: "I need to explain to a sceptical board why our customer acquisition costs have risen 40 percent while lifetime value has remained flat. The audience is financially sophisticated but unfamiliar with our industry's dynamics. I have five minutes. The key tension I need to resolve is between short-term cost pressure and long-term brand investment."
The difference is not merely in the length or specificity of the prompt. It is in the quality of thinking that preceded it. The second request reflects someone who has already built the pyramid, clarifying the problem, identifying the audience, recognising the constraints, and surfacing the central tension. The AI can then be genuinely helpful, not as a replacement for thought but as an accelerant for thought that has already been done.
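For those who reach these models through an API rather than a chat window, the contrast can be made concrete in code. The sketch below is illustrative only, assuming the OpenAI Python SDK; the model name is a placeholder, and the prompts are simply the two requests above.

```python
# A minimal sketch comparing a vague and a structured prompt.
# Assumes the OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# The vague request: the model must guess the purpose, audience, and constraints.
vague = "Write me something about leadership."

# The structured request: the thinking has been done before the model is involved.
structured = (
    "I need to explain to a sceptical board why our customer acquisition costs "
    "have risen 40 percent while lifetime value has remained flat. The audience "
    "is financially sophisticated but unfamiliar with our industry's dynamics. "
    "I have five minutes. The key tension I need to resolve is between short-term "
    "cost pressure and long-term brand investment."
)

for label, prompt in [("vague", vague), ("structured", structured)]:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

The code adds nothing the prose has not already said; the point is that the quality of the second response is determined before the API is ever called.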
The Harvard-BCG study surfaced another finding worth noting. When researchers analysed the variation in ideas produced by consultants using AI, they found that while individual outputs were of higher quality, the collective set of outputs showed less diversity. The AI, it seems, nudged everyone toward similar solutions.
If everyone uses AI in the same way, and AI tends toward certain patterns of response, competitive advantage may shift toward those who think differently, including those who choose, at times, to think without AI assistance. The study's authors speculated that "companies generating ideas without AI assistance may stand out" precisely because their outputs will be less homogenised.
Clear thinking is relevant here too. The person who has genuinely understood a problem—who has worked through its tensions and ambiguities—is better positioned to push back against AI's default responses, to ask for alternatives, to recognise when a superficially appealing answer misses something important. Orwell's advice to avoid "ready-made phrases" applies as much to AI-generated content as to human clichés. The danger is not that the machine will produce nonsense—it rarely does—but that it will produce plausible-sounding mediocrity that we accept without scrutiny.
Sebastian Raschka, a machine learning researcher, observed in his recent review of the year in LLMs (5) that "similar to coding, I do not see LLMs making technical writing obsolete. Writing a good technical book takes thousands of hours and deep familiarity with the subject." He notes that AI can help find errors, expand references, and reduce time on mundane tasks, but "the core work still depends on human judgment and expertise."
This is consistent with the broader pattern emerging from early AI adoption studies. The technology is remarkably powerful at certain tasks and persistently unreliable at others. It can generate first drafts, but struggles to recognise when a draft fails to serve its purpose. It can produce syntactically correct prose, but cannot reliably distinguish between an argument that is logically valid and one that merely sounds convincing. It can offer many options, but cannot reliably identify which option is most appropriate for a particular context.
These gaps map onto the skills that liberal arts education has traditionally cultivated: the evaluation of evidence and argument, the recognition of context and audience, the capacity to synthesise disparate sources into coherent understanding. Philip Tetlock's foxes are valuable precisely because they possess these capabilities—they can draw on multiple frameworks, adjust their views in light of new evidence, and recognise the limits of their own knowledge.
What does this suggest for those navigating the current transition?
First, that the investment in learning to write clearly is not wasted by AI; it is made more valuable. Clear writing is diagnostic of clear thinking, and clear thinking is the foundation of effective AI collaboration. The discipline of forcing yourself to say exactly what you mean, in the simplest terms possible, is precisely the discipline required to work productively with language models.
Second, that prompting is less about technique than about thought. The various frameworks for "prompt engineering" have their uses, but they are secondary to the prior work of understanding what you are trying to accomplish. The best prompt is one that reflects genuinely clear thinking about the problem at hand: its constraints, its audience, its success criteria, its tensions.
Third, that metacognitive awareness, knowing what you know and what you do not know, becomes increasingly valuable. The jagged frontier means that AI's capabilities are genuinely uncertain and constantly shifting. The capacity to recognise when you are in territory where AI is likely to mislead, and when you are in territory where it can be trusted, is itself a form of expertise.
Fourth, that deliberate practice in areas of human comparative advantage remains essential. The tendency to offload thinking to AI carries real risks of skill atrophy. Some researchers have begun noting that heavy AI users score lower on independent reasoning assessments. The antidote is not to avoid AI, but to remain "in the loop"—to use AI as a collaborator rather than a replacement, and to maintain the capacity for independent judgment through continued exercise.
Orwell's essay was written in 1946, as he was beginning work on Nineteen Eighty-Four. His concern was not merely stylistic; it was political. He believed that unclear language enabled unclear thought, and that unclear thought made people vulnerable to manipulation. The capacity to think clearly was, for him, a form of resistance.
The stakes are different now, but not entirely. Large language models are not political propaganda, but they do present us with fluent, confident-sounding text that may or may not reflect genuine understanding. The capacity to distinguish between the two, to recognise when an argument is valid and when it merely sounds valid, remains essential. This is what clear thinking provides: the ability to evaluate claims on their merits, rather than accepting them on the basis of surface plausibility.
My daughter is studying History. She is learning to evaluate conflicting sources, to construct narratives from incomplete evidence, to ask not just "what happened?" but "why?" and "what were the alternatives?" These skills will not be automated soon. More than that, they are precisely the skills required to work productively with automation: to know when to trust it, when to question it, and when to override it.
The future, as I argued previously, belongs to those who can synthesise insights from multiple analyses, orchestrate multiple AI systems, and iterate rapidly across domains. But it belongs equally to those who can think clearly about what they are trying to accomplish and who can express that thinking in language precise enough to guide both human collaborators and artificial ones.
Orwell would have understood. Clear writing is not an ornament. It is a tool for thought, and a defence against foolishness, including the new forms of foolishness that fluent machines may enable.