Playing Telephone with AI: The Educational Consequences of Whispering to the Machine

Courtesy of Reddit user: Foreign_Builder_2238

Two weeks ago, whilst at breakfast on my daily morning doom-scroll through social media, I came across a video that struck me as being as unusual as it was oddly terrifying.

The video was a montage of images (pictured above) of the popular American actor Dwayne ‘The Rock’ Johnson, which had been fed through AI and recreated. Each AI-created image was fed back into the AI and recreated again, one hundred times over. The result is like an acid-induced, nightmarish vision of abstraction, in which all physical characteristics of the original image degrade into obscurity, leaving an unrecognisable arrangement of primary colours and shapes. The video plays like a Cronenbergian body-horror: an animated, flip-book-style hell-vision in which Hollywood machismo degrades into a creepy clown, before bending and contorting into its final Picasso-esque form.

Is this the fate of conceptual thinking?
Courtesy of Reddit user: Foreign_Builder_2238

While the context of the video was comedic, and it did offer a momentary chuckle at its absurdity, I found myself thinking morbidly later about what this represents about how machines perceive human reality, and the purposes for which we use them. More specifically, as an educator, what does this mean in an age of growing fear about the ways AI is used in education? One recent article in The Guardian estimated that 92% of university students in the UK are using generative AI in some form in their assignments. Remember, some of these students will almost certainly become the scholars of tomorrow. The video began to pose a deeper question to me: if this is how generative AI reshapes an image through cycles of reiteration, what does it do to ideas?

Inspired by the video, I decided to conduct my own experiment to find out, and play a game of Telephone with AI. I wanted to see how successive cycles of iteration affect the form and meaning of rich, linguistically dense writing, and to find out: what happens to an idea when each machine-generated version is reinterpreted, again and again, by AI as a secondary source? Whilst citation decay is nothing new in academic fields, and is already well-studied, I wanted to find out how AI might propel conceptual drift away from a source concept.

The Experiment

After a Google search for a fitting quote on technology to use in my experiment, I settled on a seventy-seven-word passage famously attributed to John M. Culkin (1967), reviewing the work of the communications specialist Marshall McLuhan.

“Life imitates art. We shape our tools and thereafter they shape us. These extensions of our senses begin to interact with our senses. These media become a massage. The new change in the environment creates a new balance among the senses. No sense operates in isolation. The full sensorium seeks fulfilment in almost every sense experience. And since there is a limited quantum of energy available for any sensory experience, the sense-ratio will differ for different media.”


The passage is as irresistible to me linguistically as it is conceptually. It is dense with rhetoric and metaphor, and was selected intentionally for its philosophical and profound subtext: ideal for observing how subtle changes in form, yet significant changes in meaning, can occur through repeated AI paraphrasing. Each subsequent paraphrase was fed through OpenAI’s latest ChatGPT model (GPT-4o at the time of writing), and what emerged over the course of 100 cycles of AI reiteration created unease in both its subtlety and its implications.
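For readers who want to try something similar, the loop can be sketched in a few lines of Python. This is a minimal sketch, not the exact procedure I followed: it assumes the official OpenAI Python client and the gpt-4o model name, and the paraphrasing prompt is illustrative wording of my own.

```python
# Minimal sketch of an iterative paraphrasing loop.
# Assumptions: the official OpenAI Python client is installed,
# OPENAI_API_KEY is set in the environment, and "gpt-4o" is available.
from openai import OpenAI

client = OpenAI()

ORIGINAL = (
    "Life imitates art. We shape our tools and thereafter they shape us. "
    "..."  # the full seventy-seven-word Culkin passage goes here
)

def paraphrase(text: str) -> str:
    """Ask the model to restate the passage in its own words."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Paraphrase the passage you are given."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Feed each machine-generated paraphrase back in as the new source text.
history = [ORIGINAL]
for cycle in range(100):
    history.append(paraphrase(history[-1]))

print(history[-1])  # the passage after 100 cycles of reiteration
```

Each cycle treats the previous machine output as if it were the source, which is exactly the Telephone-style handoff the experiment is meant to mimic.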

The popular children’s game Telephone, in which a child whispers a phrase into the next child’s ear, and each child passes their interpretation of what they heard on to the next.

The Results

From the outset, the poetic nuance and metaphorical elegance of the original were stripped away almost instantaneously. In the first cycle, the complete idea of “Life imitates art” was removed from the paraphrase without a trace. By the fifth cycle, media as a playful “massage” had vanished entirely. The AI had transformed these metaphors into literal, concrete explanations. The proverbial dance between humanity and its tools emphasised in the original became a formulaic statement about ‘sensory enhancement’ and ‘perception shift’. Much like the renditions of Dwayne Johnson I saw on social media, the language had lost its original identity and now represented something else.

Perhaps most striking was how quickly this occurred. By cycle 10, all original attribution to John M. Culkin had been erased. What began as a richly metaphorical insight into human-technology interaction had become an anonymous, generalised assertion lacking historical and intellectual grounding. The implications are clear. If AI-interpreted readings and AI-generated texts increasingly populate academic discourse, there is a danger that critical, complex ideas become compressed, smoothed, and simplified to the point that they drift further and further from their intellectual lineage, and ultimately further from the reality they study.

The original text compared to the AI-paraphrased text after 100 reiterations.

Between cycles ten and thirty, something interesting happened. The text seemed to stabilise into a repetitive formula of language. Terms like ‘enhance’, ‘reshape’, and ‘transform’ began recurring predictably, as if the AI had found a comfort zone: a plateau at which the original complexity had been completely replaced by safe, predictable explanations. The AI’s tendency to rationalise rich metaphors into neutral statements devoid of creative human thinking demonstrates a key limitation of machine-generated thought.

Beyond cycle thirty, the iterations reached a terminal point at which the interpretation stabilised. Almost every cycle after that offered only a minor variation of the previous one. After this point no meaningful conceptual evolution occurred. The text had reached an equilibrium, devoid of any linguistic innovation.
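My reading of the plateau was qualitative, but a rough number can be put on it. The sketch below assumes the `history` list of successive paraphrases produced by the earlier loop and scores each cycle’s surface similarity to the one before it using only Python’s standard library; a run of scores sitting near 1.0 is the equilibrium described above.

```python
# Rough quantification of the plateau: compare each cycle's text to the
# previous one. This measures surface similarity of the strings, not meaning.
# Assumes 'history' holds the original passage followed by 100 paraphrases.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio of matching characters between two strings (0.0 to 1.0)."""
    return SequenceMatcher(None, a, b).ratio()

scores = [similarity(prev, curr) for prev, curr in zip(history, history[1:])]

for cycle, score in enumerate(scores, start=1):
    print(f"cycle {cycle:3d}: similarity to previous = {score:.2f}")

# If the plateau described above holds, the scores should climb and then
# hover near 1.0 somewhere after cycle thirty: each new cycle becomes a
# near-copy of the last.
```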

Loss or Replacement?

It is tempting to describe what happened in this experiment as loss. The metaphors vanished. The attribution disappeared. The complexity flattened. But ‘loss’ implies something was misplaced or accidentally discarded — as if the AI fumbled the baton. What actually occurred was more deliberate than that, and more troubling. The machine did not lose Culkin’s idea. It replaced it with its own version of what an idea about technology should look like.

Return, for a moment, to the Dwayne Johnson video. The AI did not randomly corrupt that image. It regenerated it — cycle after cycle — according to its internal model of what a human face is. Each iteration moved the image closer to that model and further from the specific, irreducible reality of the original photograph. The degradation was not noise. It was convergence. The machine was resolving the image toward its own understanding, and in doing so, it erased everything that made the original singular.

The same process governed the text. Each paraphrase was not a failed attempt to preserve Culkin’s meaning. It was a successful attempt to produce the kind of meaning the machine is capable of producing. And that kind of meaning has a very particular shape: it is propositional, it is literal, it is generalised, and it is stripped of the perspectival depth that made the original thought worth reading in the first place.

This distinction matters enormously. There is a long-standing difference in philosophy between two kinds of knowing. There is propositional knowledge — knowing that something is true. “Humans shape their tools and their tools shape them back” is a proposition. It can be verified, paraphrased, and taught as a fact. And then there is something deeper: understanding — grasping not just what is true, but why it is true, in context, from a particular vantage point, with all the texture and tension that entails. Culkin’s passage was not merely a proposition about technology. It was an act of understanding, expressed through metaphor and rhythm and deliberate provocation. “These media become a massage” is not a claim you can fact-check. It is an invitation to feel something about the relationship between human beings and their tools. The AI could not hold that invitation. It could only convert it into something it could hold — a clean, neutral, repeatable statement. And in doing so, it performed a quiet epistemological substitution: it replaced understanding with information.

This is not a flaw in the machine. It is a structural feature. Generative AI operates by pattern, by statistical relationship, by the aggregation of what has already been said. It has no lived experience from which to interpret the world. It has no cultural or intellectual standpoint. It cannot feel the strangeness of Culkin’s metaphor because strangeness requires a sense of the familiar to depart from — and that sense only comes from having actually inhabited a world. The machine is not ignorant of nuance in the way a lazy student might be. It is structurally incapable of producing or preserving it. The convergence we observed in the experiment is not an accident. It is what happens when a tool that can only produce one kind of knowing is asked, again and again, to interpret a kind of knowing it cannot access.

What does AI-fuelled conceptual drift mean for academia?

If AI-driven paraphrasing becomes common practice in higher education, the implications extend well beyond citation integrity. Researchers could inherit concepts which have been subtly converted, not degraded but fundamentally translated, from one epistemological register to another. A rich, perspectival argument could arrive on a student’s desk already flattened into a neutral assertion, with no seam to indicate that something was lost in the process. The idea would look fine. It would even look polished. But it would no longer be doing the same kind of intellectual work.

It is important to acknowledge that human-driven citation decay and conceptual drift are already well-documented phenomena within academia. The children’s game of Telephone provides a simplified analogy for how human error can naturally lead to cumulative distortion in knowledge. Academics encounter conceptual drift as ideas pass from author to author, each of whom reinterprets and reshapes the original concept according to their own perspective and biases. It would be wilfully ignorant not to ask: who’s to say a human would perform any better in the experiment?

But here the comparison breaks down in an important way. When a human scholar reinterprets an idea, they do so from somewhere. Their biases, their cultural context, their lived experience — these are not contaminants to be removed from the process. They are the very conditions under which understanding happens. In academia, these shifts usually occur slowly and incrementally, often spanning years or even decades. This measured pace gives academics the chance to reassess and correct misunderstandings periodically. Beyond this, understanding these shifts through positionality and reflexivity can even add to the value of academic literature. New perspectives created by conceptual drift can revive a theoretical concept that has long been forgotten and extend its relevance across generations.

In the field of education, critical pedagogy found a new home in North America in the early 1990s in the form of culturally responsive teaching, while John Dewey’s experiential learning was reconceptualised as project-based learning around 40 years after it was originally proposed. These were not corruptions of the original ideas. They were reinterpretations — shaped by the particular historical and cultural conditions of the people who encountered them. They carried the fingerprints of real human understanding.

Machines leave no such fingerprints. They are apositional — they have no lived experience to draw upon when interpreting the world. They have no cultural or intellectual standpoint from which to reinterpret ideas meaningfully. The result is not a new perspective on an old idea. It is the idea with its perspective surgically removed. And this is precisely what makes AI-driven conceptual drift so different in kind from its human equivalent. Human drift is, at worst, a distortion. At best, it is an evolution. AI drift, in its current form, is a reduction — a systematic flattening of the epistemological landscape into a single, featureless terrain.

Furthermore, it is important to recognise that AI use in academia still goes largely undisclosed. At the same time, the conceptual drift demonstrated in the experiment dilutes, distorts, and misrepresents original ideas at the click of a button. The speed is unprecedented. The result could be an abundance of oversimplified ideas, or unintended shifts in ideology that are never fully understood, because no one is positioned to recognise, let alone fix, the issue in front of them. The speed and stealth of the problem pose a compounding threat to global academic culture.

The game of Telephone is a metaphor for both human and AI-driven citation behaviour. However, the comparison yields an important distinction. Human scholars have opportunities to pause, reflect, and return to the original source, and to actively mitigate drift through deliberate reinterpretation. In contrast, the rapid and uncritical interpretations of generative AI can irreversibly erase nuance in meaning in an instant. This underlines the urgency of vigilance as academia adapts to the inevitable integration of the powerful but potentially reductive tools we now have at our disposal.

What can we do about it?

If the problem is epistemological, if AI does not merely simplify ideas but converts them into a different kind of knowing altogether, then the solution must be epistemological too. It is not enough to teach students to cite more carefully or to resist copy-pasting. We must teach them to recognise the difference between the kind of knowledge a machine can produce and the kind of knowledge that actually drives intellectual progress. We must, in other words, teach them to notice when understanding has been replaced by information.

Asking the question ‘What can humans do that machines can’t?’ could go some way towards finding the answers we need. We have already established that writing passionately about issues we care deeply about is beyond the capabilities of current machines. But the question goes deeper than passion. It is about the capacity to interpret the world from a position — to bring something of oneself to bear on an idea, and to recognise that this is not a weakness in the argument but its very source of life.

To confront the challenge of AI-propelled conceptual drift, the academia of tomorrow needs to lean into the development of positionality and reflexivity as a key part of academic education. This means moving beyond the pretence that academic writing is a neutral practice. Instead, reflexive statements need to be built into essays, and students need to be encouraged to explore how their backgrounds and values influence their arguments. But this is no longer simply a pedagogical recommendation. It is an epistemological imperative. If machines can only produce the kind of knowledge that is free of perspective, then the preservation of perspectival knowledge — knowledge that is situated, embodied, and alive with the tension of a particular human vantage point — becomes the central task of education.

Greater literacy in epistemological systems could help students understand not only what knowledge is, but also where it came from and how it was constructed. Students could be asked to investigate the lineage of ideas back to their conceptual roots — to trace not just the argument, but the conditions under which it was made. Developing students’ understanding of how ideas have evolved over time would give young learners a greater appreciation of the transformation of thought, of their intellectual heritage, and of why they hold the worldviews they do.

Additionally, a commitment to citation integrity remains hugely important to slowing the acceleration of conceptual drift. One method might be to set tasks that make primary-source analysis mandatory, preventing a ‘Dwayne Johnson effect’ in academic literature — heading off cycles of AI simplification by ensuring students always return to the original, not to a machine’s interpretation of it.

Final Words

Overcoming the issues AI poses doesn’t need to be painfully disruptive. In fact, it could be an opportunity to heal societal fissures. Setting academic assignments focused closely on solving real-world case scenarios would not only inhibit AI’s ability to provide shortcuts, but would also bridge the gap between theory and practice. It would better prepare students for the messy, complex world outside university, and could even inspire novel solutions to real problems.

But if we fail to do this, and develop a culture of offloading academic work onto tools that smooth over complexity, we risk more than weakened education. The simplification of complex ideas into seemingly objective truths goes to the heart of a very real problem within modern society. The current era is stricken by media-fuelled polarisation driven by the oversimplification of complex issues. AI-driven reduction of complexity has very real potential to destabilise society even further. Academia should make efforts to emphasise the truth in paradox, and resist simplistic evaluations which fail to acknowledge multiple perspectives.

The AI reinterpretation of academic literature represents more than just a loss of voice. As an English teacher, I know the combination of words on a page is more than just a matter of style. But now the stakes feel different, and the reason is epistemological. It is not merely that AI cannot replicate the way we write. It is that AI cannot replicate the kind of knowing that makes certain writing matter. Our perspectives on the world are formed slowly over time, through years of experience. Our joys and pains echo in the words we write. They are reflections of a life lived. Machines don’t experience the world as we do. They can only aggregate the perspectives we have lived to acquire, and in doing so, they flatten them into something universalised and featureless.

What the experiment revealed, and what the Dwayne Johnson video hinted at before I ever ran a single paraphrase, is that generative AI does not simply fail to preserve complexity. It converges away from it, systematically and structurally, toward a stable equilibrium of safe and generic knowing. Instead of fading, the idea is rewritten into a form the machine can hold.

The stakes are high in the game of Telephone we play with future generations. Preserving the character of our voice is about more than academic integrity. It is a matter of preserving the very kind of knowledge that makes human understanding possible: knowledge that is situated, perspectival, and irreducibly ours. The machine will always converge. Our task is to ensure that, alongside it, we do not.

Further Reading

European Parliamentary Research Service (EPRS) (2019). Polarisation and the Use of Technology in Political Campaigns. Brussels: EPRS. [Online]

Hovakimyan, G. & Bravo, J. M. (2024). Evolving Strategies in Machine Learning: A Systematic Review of Concept Drift Detection. Information, 15(12), 786. [Online]

Mitchell, M. (2020). Artificial Intelligence: A Guide for Thinking Humans. Penguin paperback ed. London: Penguin.

Simkin, M. & Roychowdhury, V. (2006). Do You Sincerely Want to Be Cited? Or: Read Before You Cite. Significance, 3(4), 179–181. [Online]

van Niekerk, J. et al. (2025). Addressing the Use of Generative AI in Academic Writing. Computers and Education: Artificial Intelligence, 8, 100342. [Online]

Jamie Dinler

MA Education student at UCL, and Secondary IGCSE Teacher.
