Ishiguro, memory & critical thought in the AI age
Drinking from the internet firehose, I recently came across Kazuo Ishiguro's Nobel Lecture in Literature. In it Ishiguro described standing before the rubble of the gas chambers at Auschwitz-Birkenau. His Polish hosts faced a dilemma: should perspex domes be built to preserve these relics for future generations, or should the relics be allowed to rot away to nothing? "It seemed to me a powerful metaphor for a larger dilemma," Ishiguro reflected. "How were such memories to be preserved? Would the glass domes transform these relics of evil and suffering into tame museum exhibits? What should we choose to remember? When is it better to forget and move on?"
Ishiguro has built a wonderful literary career exploring how individuals construct false narratives about their own pasts. Stevens the butler in The Remains of the Day, Kathy H. in Never Let Me Go, the painter Ono in An Artist of the Floating World: these are unreliable narrators not because they lie deliberately but because they cannot bear to see clearly. They suppress painful memories, rationalise moral failures, construct dignified stories from undignified facts. The documents of their lives exist; the meaning they ascribe to them is constructed, motivated, and often self-serving.
In that same lecture, Ishiguro extended his concern from individuals to collectives. "Does a nation remember and forget in much the same way as an individual does?" he asked. "What exactly are the memories of a nation? Where are they kept? How are they shaped and controlled?"
I've been thinking about these questions in a different context: artificial intelligence. A field less than a century old, with near-perfect documentation, virtually every paper archived and traceable, and yet somehow with a collective memory that has compressed, distorted, and in some cases simply fabricated its own origin story. The implications touch on how we think, how we verify, and whether we're building tools that will help us remember or help us forget.
A report from Jürgen Schmidhuber cropped up in my feed last week, concerning the 2025 Queen Elizabeth Prize for Engineering. Four of the seven awardees, Bengio, LeCun, Hinton, and Hopfield, had "repeatedly republished AI techniques whose creators they failed to credit." The accusation was familiar; Schmidhuber has been making variants of it for years, backing his claims with technical reports dense with citations and dates. What struck me this time wasn't the accusation itself but the response: essentially none. The field had moved on. The narrative was settled. The prizes were awarded.
The actual history of deep learning is multinational, multi-decade, and complicated. Alexey Ivakhnenko published the first working deep learning algorithm in Soviet Ukraine in 1965. Shun'ichi Amari published the first deep multilayer perceptron trained by stochastic gradient descent in 1967. Seppo Linnainmaa derived the modern form of backpropagation in his Finnish master's thesis in 1970. Kunihiko Fukushima introduced the neocognitron, the original convolutional neural network architecture, in Japan in 1979.
This history exists. It's documented. The papers are scanned and findable; the citations are traceable. Yet the field's working memory, what practitioners actually know and reference, has compressed to a simplified story centred on a handful of other figures.
Like Ishiguro's narrators, the field has access to its own history. The suppression happens not at the level of the archive but at the level of attention, interpretation, and transmission. The facts exist; the meaning ascribed to them is constructed.
Ted Gioia, the cultural critic and music historian, offers a framework for understanding how this compression can happen. In his conversations with Tyler Cowen and his "State of the Culture" essays, Gioia argues that music once served as cultural memory, the repository of a society's knowledge and history. "What people don't understand is that, for most of history, music was a kind of cloud storage for societies," he observes. "If you go to any traditional community, and you try to find the historian, generally it's a singer."
That function has increasingly been outsourced to digital systems. The storage is better, but the transmission is worse. We have everything archived and nothing remembered.
Gioia's framework distinguishes between "slow traditional culture" (deep engagement with few works), "fast modern culture" (more content, less depth), and "dopamine culture" (endless stimulation, no retention). The endpoint isn't entertainment but addiction, a culture optimised for engagement rather than comprehension.
Academic science has followed a somewhat similar trajectory. The publish-or-perish environment, extensively documented in the replication crisis literature, selects for speed and volume over depth. A 2024 survey in PLOS Biology found that nearly three-quarters of biomedical researchers believe there is a reproducibility crisis, with "pressure to publish" cited as the leading cause. Researchers don't read; they skim abstracts and cite. They don't trace intellectual lineages; they grab the nearest authoritative-seeming reference.
In this environment, a compressed narrative is functional. If you need to write an introduction situating your work in the field's history, citing "Hinton, LeCun, and Bengio" is fast. Reading Ivakhnenko's 1965 papers, understanding the Soviet context, tracing the actual development, that's slow. The system rewards the fast path.
The result is that the compressed narrative propagates not necessarily through deliberate conspiracy, but through convenience. Each researcher who uses it makes it more available for the next. The citations accumulate. The Wikipedia article reflects the citation pattern. The prize committees read the Wikipedia article. The prizes reinforce the narrative. The narrative becomes "true" through repetition, not verification.
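You can watch this dynamic in miniature. The sketch below is my own toy model, not anything from Schmidhuber's reports: standard preferential attachment, the textbook model of how citation networks grow, with invented function names and parameters. Identical papers, random early noise, and the rich get richer.

```python
import random

def simulate_citations(n_papers=5, n_citations=10_000, seed=1):
    """Toy preferential-attachment model of a citation network:
    each new citation goes to an existing paper with probability
    proportional to its current count, so small, arbitrary early
    leads compound into dominance regardless of merit."""
    rng = random.Random(seed)
    counts = [1] * n_papers  # every paper starts equal
    for _ in range(n_citations):
        # pick one paper, weighted by how often it is already cited
        winner = rng.choices(range(n_papers), weights=counts)[0]
        counts[winner] += 1
    return sorted(counts, reverse=True)

print(simulate_citations())
# A typical run is heavily skewed: the top paper collects several
# times the citations of the bottom one, though all started identical.
```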
This brings us to the uncomfortable contemporary parallel: the growing reliance on AI tools that themselves depend on compressed, potentially unreliable training narratives.
Matt Webb, the designer and technologist behind the Interconnected blog, has been writing about this intersection for years. In a 2023 post, he described the experience of ChatGPT going down: "There was a coding task I was in the middle of that I literally couldn't complete. Not because I needed API access to GPT-4, but because without ChatGPT I was too dumb to deal with it."
Webb meant this somewhat tongue-in-cheek, but the underlying dynamic is real. When we outsource cognitive labour to systems whose workings we don't scrutinise, we lose the practice that maintains our own capacities. Critical thinking is not a possession but a practice; it atrophies without use.
The Schmidhuber case illustrates this at the institutional level. Thousands of researchers, reviewers, and prize committee members encountered claims about who "invented deep learning." At each point, someone could have asked: invented it relative to what? What existed before? But the friction of that inquiry was high, the social cost of questioning was real, and the compressed narrative was readily available. So the narrative propagated.
This is what happens with AI tool reliance, but faster and at individual scale. When you ask an AI assistant a question, you get an answer with a fluency and confidence that feels authoritative. The friction that would have existed (going to sources, comparing accounts, noticing contradictions) is removed. You've retrieved a conclusion without traversing the territory that would have built your capacity to evaluate conclusions.
Webb has proposed that nations need a "Strategic Fact Reserve": trusted, uncontaminated training data as a hedge against the interests embedded in existing AI systems. "Large Language Models Reflect the Ideology of their Creators," he notes, citing a 2024 arXiv paper of that name. "Chinese large language models will give China-appropriate answers; American models, American."
The concern extends beyond ideology to simple accuracy. If the training data reflects compressed narratives, if the AI "knows" that Hinton invented deep learning in the same way that Ishiguro's Stevens "knows" he served a great man, then the tools we use to augment our thinking will propagate the compressions, not correct them.
Atrophied critical thinking makes you more reliant on AI, which accelerates the atrophy.
If you can't evaluate sources yourself, you need AI to summarise them. If you can't hold complexity in your head, you need AI to simplify it. If you can't tolerate confusion, you need AI to resolve it immediately. Each use makes the underlying incapacity worse, which makes the next use more necessary.
This is the structure of dependency, and it operates at both individual and institutional levels.
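To see the shape of that loop, here's an equally crude sketch of my own; the parameters are invented and only the structure matters: a tipping point in capacity, above which practice wins and below which the spiral feeds itself.

```python
def capacity_over_time(capacity=0.5, steps=25,
                       practice_gain=0.05, outsource_loss=0.08):
    """Toy feedback loop: the lower your capacity, the more of the
    task you outsource; outsourced work erodes capacity, practised
    work rebuilds it. Where the two balance, at
    outsource_loss / (practice_gain + outsource_loss), is a
    tipping point, not an equilibrium you settle into."""
    history = [capacity]
    for _ in range(steps):
        outsourced = 1.0 - capacity  # weaker => lean harder on the tool
        capacity += (1.0 - outsourced) * practice_gain
        capacity -= outsourced * outsource_loss
        capacity = max(0.0, min(1.0, capacity))
        history.append(round(capacity, 3))
    return history

print(capacity_over_time(0.7))  # above the tipping point: practice wins, capacity climbs
print(capacity_over_time(0.5))  # below it: each step makes the next outsourcing more likely
```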
In the AI field itself, the compressed narrative persists partly because researchers now lack the training to do intellectual history properly. They were trained in a system that already relied on the compressed version, so they don't have the skills or disposition to question it. Schmidhuber's detailed technical reports, tracing specific claims to specific papers with specific dates, represent a mode of scholarship that most AI researchers never learned and can't evaluate.
The practical implications are significant. Lawyers have submitted court filings citing cases fabricated by ChatGPT, rendered with the fluency of real ones. They didn't check because checking felt redundant. Students submit AI-generated essays they cannot discuss, having accumulated a credential without the competence it was meant to certify. Developers ship code they don't fully understand, assembled from AI suggestions that work until they don't.
In each case, the pattern is the same: an apparently authoritative system offers a shortcut, the shortcut is taken, the practice that would have maintained independent capacity is skipped, and the ability to evaluate or question the system decreases.
Schmidhuber's reports, long, dense, full of specific citations, demand precisely the mode of engagement that the dopamine-culture dynamic selects against. Reading them is a choice to practise the slow mode. The effort is the point.
This applies at every scale. For individuals: trace claims to sources. When you encounter a historical claim, ask what evidence supports it. Use AI tools for what they're good at (speed, breadth, pattern-matching) but maintain the practice of verification that keeps your own judgment sharp.
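Verification doesn't have to be artisanal, either. As one concrete example, a dozen lines against the public Crossref API will tell you whether a cited paper actually exists. This is my own sketch (the function name and query string are illustrative), using Crossref's documented endpoint and fields:

```python
import requests

def check_citation(citation_text, rows=3):
    """Look a claimed reference up in the public Crossref index,
    so a citation can be checked against the scholarly record
    rather than taken on trust."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_text, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        {
            "title": (item.get("title") or ["<untitled>"])[0],
            "doi": item.get("DOI"),
            "year": item.get("issued", {}).get("date-parts", [[None]])[0][0],
        }
        for item in items
    ]

# Does the neocognitron paper actually exist, in the claimed venue?
for match in check_citation("Fukushima neocognitron Biological Cybernetics"):
    print(match)
```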
For organisations: don't outsource intellectual due diligence to credentials. The credentials emerge from systems that have demonstrated unreliable collective memory. When evaluating expertise, ask people to explain the provenance of their claims. If they can only cite the last ten years, they may not understand the tools they're using.
For the AI field itself: the figures Schmidhuber champions (Ivakhnenko, Fukushima, Amari, Linnainmaa) are not obscure cranks. They're pioneers whose work is foundational to everything that followed. Acknowledging them accurately isn't political correctness or score-settling; it's basic intellectual hygiene. A field that can't maintain accurate memory of its own recent past has no business claiming to build systems that will shape humanity's future.
Ishiguro's question at Birkenau, "What should we choose to remember?" implies that remembering is a choice, and therefore forgetting is too. His novel The Buried Giant explores this explicitly: an enchanted mist causes collective amnesia across post-Arthurian Britain, and the question is whether this forgetting is a mercy or a crime.
For AI history, the answer is clear. This is not a case where forgetting enables peace. This is a case where forgetting enables continued misallocation of credit, continued narrowing of the field's intellectual range, continued atrophy of the critical capacities we'll need as AI systems become more consequential.
Stevens, at the end of The Remains of the Day, looks back on a life spent in service to a man whose values he never questioned, and realises too late what he failed to see. The recognition comes, but the years are gone.
The documents of AI's history exist. The primary sources are available. The question is whether we'll become the kind of people who bother to read them, or whether we'll accept the compressed narrative and move on, practising the intellectual passivity that makes us ever more dependent on systems we don't understand, built on histories we never verified.
The effort of checking is the exercise that maintains the capacity to check. Skip it habitually, and the capacity atrophies. This is true for individuals, for institutions, and for fields.
Ishiguro's unreliable narrators discover, eventually, what they've been hiding from themselves. The recognition is painful but necessary. For the rest of us, the choice is whether to wait for that moment of forced clarity, or to practise seeing clearly now, while there's time to act on what we find.
-
Ishiguro, K., "My Twentieth Century Evening – and Other Small Breakthroughs," Nobel Lecture (2017)
Schmidhuber, J., "A Nobel Prize for Plagiarism," Technical Report IDSIA-24-24, IDSIA (2024)
Gioia, T., Interview with Tyler Cowen, Conversations with Tyler (2020)
Cobey, K.D. et al., "Publish or perish blamed for reproducibility crisis," PLOS Biology (2024)
Webb, M., "The need for a strategic fact reserve," Interconnected (2025)