The Pattern-Matching Trap

In 1942, psychologist Abraham Luchins presented hundreds of participants with a simple puzzle: given three water jars of different capacities, measure out a specific amount of water. The first few problems all had the same solution: fill jar B, pour from it into jar A once, then into jar C twice. B minus A minus 2C. It worked every time.

Then Luchins changed the game. He presented problems where a much simpler solution existed, such as A plus C, or A minus C. The participants who had practiced the complex method overwhelmingly missed the simple one. They kept applying B minus A minus 2C, even when it was unnecessarily convoluted. Meanwhile, a control group with no prior practice solved the simpler problems instantly.
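
A few lines of arithmetic make the set-up concrete. The jar volumes below are the figures commonly cited from Luchins' series (21, 127, and 3 units for a training problem; 23, 49, and 3 for a critical problem); the function names are illustrative, not from the original study.

```python
# A sketch of Luchins' water jar set-up. The volumes are the
# commonly cited textbook examples, not his full problem list.

def set_solution(a, b, c):
    """The practiced method: fill B, pour off A once and C twice."""
    return b - a - 2 * c

def simple_solution(a, b, c, target):
    """Check whether a one-step alternative reaches the target."""
    if a + c == target:
        return "A + C"
    if a - c == target:
        return "A - C"
    return None

# Training problem: only the complex method works.
print(set_solution(21, 127, 3))          # 100, the required amount
print(simple_solution(21, 127, 3, 100))  # None

# Critical problem: both methods work, but A - C is far simpler.
print(set_solution(23, 49, 3))           # 20
print(simple_solution(23, 49, 3, 20))    # "A - C"
```

The practiced participants kept executing the first function even on problems where the second would have answered in one step.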

The finding was counterintuitive. Experience, which should help, actually hurt. The participants weren't stupid. They weren't careless. They had simply learned a pattern that worked, and that learning had made them worse at seeing alternatives.

Luchins called this the Einstellung effect, from the German word for "attitude" or "setting." It describes the development of what he called a "mechanized state of mind": a predisposition to solve problems in a specific manner even though better methods exist. His warning, issued over eighty years ago, has only grown more relevant: "When, instead of the individual mastering the habit, the habit masters the individual, then mechanization is indeed a dangerous thing."

The Efficiency Trap

Why do our brains do this? The answer lies in three interlocking mechanisms that cognitive scientists have documented extensively. Each serves a useful purpose. Together, they create a trap.

The first mechanism is processing fluency, the subjective ease with which we process information. Research by psychologist Rolf Reber and colleagues has shown that fluency affects everything from perceived truth to aesthetic preference. Statements written in clearer fonts are rated as more likely to be true. Names that are easier to pronounce are judged as more trustworthy. The brain uses ease of processing as a proxy for familiarity, and familiarity as a proxy for safety and truth.

This is usually adaptive. In a stable environment, things you've encountered before are generally safer than things you haven't. The problem emerges when genuinely new things need to be evaluated. Novelty, by definition, is disfluent. It requires more cognitive work. And the brain, ever efficient, tends to avoid that work when it can.

The second mechanism is the Einstellung effect itself: the tendency for prior successful solutions to block exploration of alternatives. Luchins demonstrated this with water jars. Later researchers extended the finding to chess, where even grandmasters miss shorter solutions when a familiar pattern is available. Merim Bilalić, Peter McLeod, and Fernand Gobet (researchers at Oxford and Brunel universities who study expertise and problem-solving) tracked expert chess players' eye movements as they solved problems with both a familiar solution and a shorter, less familiar one. The experts reported looking for better solutions. But their eyes kept returning to features related to the solution they already knew. The first idea didn't just influence their thinking; it controlled where they looked.

The third mechanism is functional fixedness, first described by Gestalt psychologist Karl Duncker in 1945. In his famous candle problem, participants were given a candle, a box of thumbtacks, and matches, and asked to attach the candle to the wall so wax wouldn't drip on the table. Most failed. They tried to tack the candle directly to the wall, or melt it and stick it on. The solution (empty the thumbtack box, tack it to the wall as a shelf, and place the candle inside) required seeing the box as something other than a container for tacks.

When Duncker presented the tacks separately from the box, success rates doubled. A small change in presentation broke the functional fixedness. The box, no longer serving its "obvious" function, became available for creative use.

These three mechanisms (fluency bias, Einstellung, and functional fixedness) aren't bugs in human cognition. They're features. Pattern-matching saves enormous cognitive resources. If you had to evaluate every situation from first principles, you'd never get through the day. The problem is that these efficiency mechanisms don't know when to turn off. They apply the same logic to situations where novelty genuinely matters, and in those situations, they become traps.

The Buzzword Tell

This framework illuminates a phenomenon that venture capitalist Peter Thiel (the PayPal co-founder, first outside investor in Facebook, and author whose book Zero to One shaped Silicon Valley's approach to innovation) has observed in startup pitches.

When entrepreneurs describe their companies using familiar category language ("We're building an AI-powered platform for big data analytics in the cloud"), Thiel calls it a "buzzword tell." The buzzwords reveal something important, he argues. Not about the technology, but about the thinking. "All these buzzwords are a tell, like in poker, that the company is bluffing and undifferentiated."

Thiel's observation is usually interpreted as advice about marketing or positioning. But the behavioural science suggests something deeper is happening. Buzzwords are cognitively fluent. They process easily because they map onto existing mental categories. When an investor hears "search engine," their brain retrieves a prototype, probably AltaVista or Yahoo, circa 1998. The category label feels right precisely because it's familiar.

This is the Einstellung effect applied to evaluation rather than problem-solving. An investor who has seen hundreds of startups develops pattern-recognition for what "successful" looks like. That pattern-recognition is valuable: it allows quick filtering of obviously bad ideas. But it also creates blind spots for things that don't fit existing categories.

Google, in 1998, would have been described as a search engine. There were already more than twenty search engines. The category label was accurate. It was also, in Thiel's framing, almost completely unhelpful, because it obscured the only thing that actually mattered: PageRank, the algorithm that made Google fundamentally different from everything that came before.
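
To make that contrast concrete, here is a minimal sketch of the idea behind PageRank: rank flows along links, so a page's score depends on who points to it rather than on its keywords. This is the textbook power-iteration formulation, not Google's production system; the toy graph and damping factor are illustrative.

```python
# Minimal power-iteration sketch of the PageRank idea: a page's score
# depends on the scores of the pages linking to it, not on its content.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Pages "a" and "b" could carry identical keywords, yet they score
# very differently once link structure is taken into account.
toy_web = {"a": ["c"], "b": ["c"], "c": ["a"]}
print(pagerank(toy_web))  # "c" accumulates rank from its inbound links
```

A keyword-matching prototype of "search engine" has no place for this: the ranking signal lives in the graph, not in the documents.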

"Even the people who are running these companies will describe them in terms of existing categories because that's so much easier to do," Thiel notes. This is the curse of fluency. Finding new vocabulary to describe genuinely new things is cognitively expensive. The brain prefers the easy path, which is to say, the path that leads to existing categories, even when those categories don't fit.

The Category Problem

The challenge runs deeper than individual word choices. It concerns how categorisation itself shapes perception.

Eleanor Rosch (the UC Berkeley psychologist whose prototype theory transformed cognitive science's understanding of how the mind organises knowledge) showed that we don't understand categories through definitions. We understand them through central examples. When someone says "bird," you think of a robin, not an ostrich. When someone says "search engine," you think of whatever exemplar dominates your mental model.

This matters because prototypes don't just represent categories; they constrain what we can see within them. If your prototype for "search engine" is a system that returns results based on keyword matching, then a system that returns results based on link structure doesn't quite fit. It's technically in the category, but it feels wrong. The category label that should illuminate instead obscures.

Thomas Kuhn (the philosopher and historian of science whose 1962 book The Structure of Scientific Revolutions introduced "paradigm shift" into common vocabulary) made a parallel observation about scientific discovery. Paradigm shifts don't just introduce new facts. They introduce new ways of seeing that make previously invisible phenomena suddenly obvious. Before the shift, scientists literally couldn't see what was in front of them. The old categories organised their perception.

Kuhn's famous example was oxygen. Joseph Priestley and Antoine Lavoisier were looking at the same experimental results. But Priestley, working within the old phlogiston paradigm, saw "dephlogisticated air." Lavoisier saw oxygen. The difference wasn't in the data. It was in the conceptual framework that made sense of the data.

This connects to what economists Colin Camerer, George Loewenstein, and Martin Weber identified in 1989 as the "curse of knowledge," the cognitive bias that makes it difficult to imagine not knowing something once you know it. The curse works in both directions. Experts can't easily reconstruct the novice's perspective. But they also can't easily see past their own categories once those categories are established.

The implication is uncomfortable. The very expertise that allows you to evaluate things efficiently may prevent you from seeing things that don't fit your existing frameworks. Your pattern-recognition, honed by experience, becomes a filter that screens out precisely the signals that matter most.

The Expertise Paradox

This creates what researchers have called the "expertise paradox": the counterintuitive finding that deep knowledge can impair performance on tasks that require seeing past established patterns.

The chess research by Bilalić and colleagues is particularly striking. Expert players, asked to find the best move in positions where both a familiar solution and a shorter, unfamiliar one existed, consistently found the familiar solution first. They then reported looking for something better, while their eye movements showed them returning again and again to features of the solution they had already found. The Einstellung effect wasn't just influencing their choices. It was directing their attention without their awareness.

A 2021 study in the journal Innovation & Management Review examined the same phenomenon in a different context: startup accelerators. Researchers Luciana Barlach and Guilherme Ary Plonski studied directors and managers of Brazilian accelerators, finding that their selection processes functioned as "templates" for recognising potentially successful companies. The templates worked: they allowed efficient filtering. But they also created systematic blind spots for ventures that didn't match established patterns.

The researchers found that even experienced evaluators who believed they were open to novelty showed measurable Einstellung effects when making decisions. The criteria that had worked before kept reasserting themselves, even when explicitly challenged. The habit, as Luchins warned, was mastering the individual.

Mitchell Nathan and Anthony Petrosino, educational psychologists at the University of Wisconsin and Vanderbilt respectively, documented a related phenomenon they call "expert blind spot." Teachers with deep subject-matter knowledge systematically underestimate how difficult concepts will be for novices to learn. Their expertise has automated so many cognitive steps that they literally can't access the intermediate reasoning that learners need. The same depth of knowledge that makes them experts makes them worse at seeing from the learner's perspective.

The pattern repeats across domains. Medical diagnosticians anchor on initial hypotheses. Experienced engineers apply familiar solutions to unfamiliar problems. Seasoned investors pattern-match new opportunities to past successes. In each case, the expertise that enables efficient performance in normal conditions creates blind spots for situations that don't fit the template.

Breaking the Pattern

What actually works to counter these biases? The research points to several interventions, though none are silver bullets.

Luchins himself tested one approach. In some conditions, he had participants write "Don't be blind" on their papers after solving several problems with the complex method. The results were striking: over half of the participants who received this instruction found the simpler solution to subsequent problems. But the instruction only worked when participants understood it as a genuine prompt to reconsider their approach. Those who treated it as just more words to remember showed no improvement.

This suggests that awareness alone isn't sufficient, but awareness combined with an explicit prompt to pause and reconsider can help. The instruction works not by providing information but by interrupting the automatic pattern-matching process.

Duncker's candle problem points to a different intervention: changing how information is presented. When the thumbtacks were shown outside the box rather than inside it, success rates doubled. The small change broke the functional fixedness by making the box available as a separate object rather than defining it by its current contents.

The application to evaluation contexts is direct. If you want to see a startup outside its obvious category, don't start with the category. Ask what specific problem it solves, for whom, and how. Get concrete before getting abstract. The category label is fluent but constraining. Specific details are effortful but revealing.

Cross-domain exposure also helps. Researchers have found that analogies from distant fields can break mental sets that analogies from nearby fields reinforce. If you're evaluating a fintech startup, insights from healthcare or logistics may be more useful than insights from other fintech companies, precisely because they don't activate the same templates.

Several practical techniques emerge from the research:

  • The empty box move: Before evaluating something, strip away its category labels. Describe what it actually does, concretely, without using industry terminology. This is cognitively expensive, which is exactly the point.

  • The novice question: Ask someone unfamiliar with the domain what they notice. Novices often see what experts have learned to filter out. Their disfluency is diagnostic.

  • The "compared to what?" discipline: When you find yourself pattern-matching to a familiar category, force yourself to name specific alternatives and articulate concrete differences. Vague categorisation is the enemy; specificity is the antidote.

  • The Luchins prompt: Literally ask yourself, before making a judgment, "What am I not seeing because of what I already know?" The question doesn't guarantee insight, but it interrupts the automatic process.

  • Deliberate defamiliarisation: Describe familiar things as if you've never encountered them before. This technique, borrowed from literary theory, forces cognitive effort that can reveal assumptions you didn't know you were making.

None of these techniques eliminate the pattern-matching bias. The mechanisms are too deep, too useful, too automatic for that. The goal is not to stop pattern-matching but to create moments where you can see past it when it matters.

The Mechanized Mind

Luchins' water jar experiment is over eighty years old. The problems he identified have not been solved; they've been magnified.

We live in an environment saturated with fluent signals. Algorithms surface content that matches our existing patterns. Buzzwords proliferate because they're cognitively cheap. Categories multiply and harden. The very ease of information processing that technology provides reinforces the tendency to see new things through old frameworks.

The Einstellung effect, Luchins observed, creates a "mechanized state of mind." The phrase feels newly resonant in an age of actual machines: systems that literally work by pattern-matching at scale. Large language models excel at interpolation within existing categories. They struggle with genuine novelty, because genuine novelty is, by definition, outside the training distribution.

The human advantage, if there is one, lies in the capacity to recognise when pattern-matching fails and to do the effortful work of seeing differently. But this capacity requires cultivation. It doesn't happen automatically. The brain's default is efficiency, and efficiency means applying what worked before.

Thiel's buzzword tell, viewed through this lens, is not primarily about startup pitches. It's about a fundamental feature of human cognition encountering situations where that feature becomes a liability. The fluent description is a symptom. The disease is the mechanized mind applying familiar patterns to unfamiliar territory.

"Don't be blind," Luchins told his participants. The instruction helped, but only for those who actually took it seriously, who understood it as a genuine invitation to see differently rather than just more words to process.
