Playing Telephone with AI: The Educational Consequences of Whispering to the Machine
Courtesy of Reddit user: Foreign_Builder_2238
Two weeks ago, whilst at breakfast on my daily morning doom-scroll through social media, I came across a video that struck me as being as unusual as it was oddly terrifying.
The video was a montage of images (pictured above) of the popular American actor Dwayne ‘The Rock’ Johnson, which had been fed through AI and recreated. The AI-created image was successively fed back into the AI and recreated again, one hundred times over. What results is an acid-induced, nightmarish vision of abstraction, in which all physical characteristics of the original image degrade into obscurity, leaving an unrecognisable arrangement of primary colours and shapes. The video plays like Cronenbergian body-horror: an animated, flip-book-style hell-vision in which Hollywood machismo degrades into a creepy clown, before bending and contorting into its final Picasso-esque form.
Is this the fate of conceptual thinking?
Courtesy of Reddit user: Foreign_Builder_2238
While the context of the video was comedic, and it did offer a momentary chuckle at its absurdity, I later found myself morbidly thinking about what this represents about how machines perceive human reality, and the purposes for which we use them. More specifically, as an educator, what does this mean in an age of growing fear about the ways in which AI is used in education? One recent article in The Guardian estimated that 92% of university students in the UK are using generative AI in some form in their assignments. Remember, some of these students will almost certainly become the scholars of tomorrow. The video posed a deeper question to me: if this is how generative AI reshapes an image through cycles of reiteration, what does it do to ideas?
Inspired by the video, I decided to conduct my own experiment and play a game of Telephone with AI. I wanted to see how successive cycles of iteration affect the form and meaning of rich, linguistically dense writing, and to find out what happens to an idea when a machine-generated version of it is reinterpreted again and again by AI as a secondary source. Whilst citation decay is nothing new in academic fields, and is already well-studied, I wanted to discover the ways in which AI might propel conceptual drift away from a source concept.
The Experiment
After a Google search for a fitting quote on technology to be used in my experiment, I settled on a seventy-seven-word passage famously attributed to John M. Culkin (1967), reviewing the work of the communications specialist Marshall McLuhan.
“Life imitates art. We shape our tools and thereafter they shape us. These extensions of our senses begin to interact with our senses. These media become a massage. The new change in the environment creates a new balance among the senses. No sense operates in isolation. The full sensorium seeks fulfilment in almost every sense experience. And since there is a limited quantum of energy available for any sensory experience, the sense-ratio will differ for different media.”
The passage is as irresistible to me linguistically as it is conceptually. It is dense with rhetoric and metaphor, and was selected intentionally for its philosophical and profound subtext. This made it ideal for observing how subtle changes in form, yet significant changes in meaning, can occur through repeated AI paraphrasing. What emerged over the course of 100 cycles of AI reiteration created unease in both its subtlety and its implications.
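The mechanics of the experiment are simple to reproduce. Below is a minimal sketch of the loop in Python; the `toy_paraphrase` function is purely an illustrative stand-in of my own invention (a real run would call a language model at that point, and nothing about the stand-in reflects the actual model or prompt used):

```python
from collections import Counter

def run_telephone(text, paraphrase, cycles=100):
    """Feed `text` through `paraphrase` repeatedly, recording every cycle's output."""
    history = [text]
    for _ in range(cycles):
        history.append(paraphrase(history[-1]))
    return history

def toy_paraphrase(text):
    """Illustrative stand-in for an AI paraphraser: drops the rarest word each
    cycle, crudely mimicking the smoothing the experiment observed."""
    words = text.split()
    if len(words) <= 5:  # stop degrading once little is left
        return text
    counts = Counter(words)
    rarest = min(words, key=lambda w: counts[w])
    words.remove(rarest)
    return " ".join(words)

history = run_telephone("We shape our tools and thereafter they shape us",
                        toy_paraphrase, cycles=3)
```

Keeping every intermediate output in `history`, rather than only the final text, is what makes it possible to pinpoint exactly which cycle erased which idea.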
The popular children’s game, Telephone, in which a child whispers a word into the next child’s ear, and each child passes their interpretation of what they heard on to the next.
The Results
From the outset, the poetic nuance and metaphorical elegance of the original were stripped away almost instantaneously. In the first cycle, the complete idea of “Life imitates art” was removed without a trace from the paraphrase. By the fifth cycle, media as a playful “massage” had vanished entirely. The AI had transformed these metaphors into literal, concrete explanations. The proverbial dance between humanity and its tools emphasised in the original became a formulaic statement of ‘sensory enhancement’ and ‘perception shift’. Much like the renditions of Dwayne Johnson I saw on social media, the language had lost its original identity and now represented something else entirely.
Perhaps most striking was how quickly this occurred. By cycle 10, all attribution to John M. Culkin had been erased. What began as a richly metaphorical insight into human-technology interaction had become an anonymous, generalised assertion lacking historical and intellectual grounding. The implications are clear. If AI-generated texts increasingly populate academic discourse, there is a danger that critical, complex ideas could become compressed, smoothed, and simplified to the point that they are disconnected from their intellectual lineage, and ultimately from the reality they describe.
The original text compared to the AI-paraphrased text after 100 reiterations.
Between cycles ten and thirty, something interesting happened. The text seemed to stabilise into a repetitive formula of language. Terms like ‘enhance’, ‘reshape’, and ‘transform’ began recurring predictably, as if the AI had found a comfort zone. At this plateau, the original complexity had been completely replaced by safe, predictable explanations. AI’s tendency to rationalise rich metaphors into neutral statements devoid of creative human thinking demonstrates a key limitation of machine-generated thought.
Beyond cycle thirty, the iterations reached a terminal point where the interpretation stabilised. Almost every subsequent cycle offered only a minor variation on the previous one. After this point, no meaningful conceptual evolution occurred. The text had reached an equilibrium at which it was devoid of any linguistic innovation.
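This plateau can also be quantified. One simple, admittedly crude approach (an assumed illustration, not the method used to produce the results above) is to score each cycle's output against its predecessor using Jaccard similarity over word sets; the equilibrium is the point after which successive scores stay near 1.0:

```python
def jaccard(a, b):
    """Word-set overlap between two texts: 1.0 means identical vocabulary."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if (wa | wb) else 1.0

def drift_curve(history):
    """Similarity of each cycle's output to the previous cycle's output."""
    return [jaccard(prev, cur) for prev, cur in zip(history, history[1:])]

def plateau_cycle(history, threshold=0.95):
    """First cycle from which every later similarity stays above `threshold`."""
    curve = drift_curve(history)
    for i in range(len(curve)):
        if all(s >= threshold for s in curve[i:]):
            return i + 1
    return None
```

A word-set measure like this is deliberately blunt: it detects when the text has stopped changing, not whether the surviving words still mean what the original meant.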
What does AI-fuelled conceptual drift mean for Academia?
If AI-driven paraphrasing becomes common practice in higher education, there is a risk that researchers could unknowingly inherit concepts which have been diluted. The impact on academia could be profound if proper counter-measures aren’t in place. As ideas drift from their nuanced origins, academia’s ability to tackle real-world issues could weaken. Education systems which interact with generative AI must grapple with serious epistemological issues. Understanding how to preserve depth, complexity, and critical context in knowledge, when our tools in their current form favour simplification, is a real challenge.
It’s important to acknowledge that human-driven citation decay and conceptual drift are already well-documented phenomena within academia. The children’s game of Telephone provides a simplified analogy of how human error can naturally lead to cumulative distortion in knowledge. Academics encounter conceptual drift as ideas pass from author to author, each of whom reinterprets and reshapes the original concept according to their own perspective and biases. It would be wilfully ignorant not to ask: who’s to say a human would perform any better in the experiment?
In academia, these shifts usually occur slowly, in an incremental fashion. Often they span years or even decades. This measured pace gives academics the chance to reassess and correct misunderstandings periodically. Beyond this, understanding these shifts through positionality and reflexivity can even add to the value of academic literature. New perspectives created by conceptual drifts can revive a theoretical concept which has been long forgotten and further its relevance across generations. In the field of education, critical pedagogy found a new home in North America in the early 1990s in the form of culturally responsive teaching, whereas John Dewey’s Experiential learning was reconceptualised in project-based learning around 40 years after it was originally proposed.
However, it’s important to recognise that AI’s use in academia still goes largely undisclosed. At the same time, the citation decay demonstrated in the experiment dilutes, distorts, and misrepresents original ideas at the mere click of a button. The rapidity of the issue is unprecedented. It could lead to an abundance of oversimplified ideas, or unintended shifts in ideology which are never fully understood, with no one positioned to recognise, let alone fix, the problem in front of them. The speed and stealth of the issue pose a compounding threat to global academic culture.
The game of Telephone is a metaphor for both human and AI-driven citation behaviour. However, the comparison yields an important distinction. Human scholars have opportunities to pause, reflect, and return to the original source, and can actively mitigate drift through deliberate reinterpretation. In contrast, the rapid and uncritical interpretations of generative AI can irreversibly erase nuance in meaning in an instant. This underlines the urgency for vigilance as academia adapts to the inevitable integration of the powerful but potentially reductive tools we now have at our disposal.
What can we do about it?
Trying to imagine a future which accommodates such a radical change is not easy, not least for the vision it requires. But the questions we ask ourselves don’t have to be complex. Asking ‘what can humans do that machines can’t?’ could go some way towards finding the answers we need. Writing passionately about issues we care deeply about remains beyond the capabilities of current machines. Could the academic literature of tomorrow place a greater emphasis on creative and lucid language? If we are serious about distinguishing ourselves by the things only we can do, then writing with clarity, conviction, and complexity must be valued as a method of thinking in itself, not just a stylistic choice.
To confront the challenge of AI-propelled conceptual drift, the academia of tomorrow needs to lean into the development of positionality and reflexivity as a key part of academic education. This means moving beyond treating academic writing as a neutral exercise. Instead, reflexive statements need to be built into essays, and students need to be encouraged to explore how their backgrounds and values influence their arguments. For this to succeed, greater literacy in epistemological systems is needed to help students understand not only what knowledge is, but also where it came from and how it was constructed. By placing voice and perspective at the centre of academic study, education can preserve the human element in academic writing.
Additionally, a commitment to the integrity of citation is hugely important in slowing the acceleration of conceptual drift. One important method might be setting tasks that make primary-source analysis mandatory, preventing a ‘Dwayne Johnson effect’ in academic literature, in which cycles of AI simplification strip ideas of their substance. In addition, students could be asked to trace the lineage of ideas back to their conceptual roots. Developing students’ understanding of how ideas have evolved over time would not only give young learners a greater appreciation for the transformation of thought, but would also develop an appreciation for their intellectual heritage and an understanding of why they hold the worldview they do.
Final Words
Overcoming the issues AI poses doesn’t need to be painfully disruptive; in fact, it could be an opportunity to heal societal fissures. Ensuring an overlap between the solutions we create and the broader issues we face in society is an opportunity to forge positive change. AI cannot make ethical judgements and cannot assess what should be. Those decisions are inspired by values rooted in human experience. Neither can AI physically campaign, mobilise support, or advocate for change in society. Setting students academic assignments focused closely on solving real-world case scenarios would not only inhibit AI’s ability to provide shortcuts, but would bridge the gap between theory and practice. It would also better prepare students for the messy, complex world outside university, and could even inspire novel solutions to real problems. However, if we fail to do this, and instead develop a culture of offloading academic work onto tools that smooth over complexity, we risk more than weakened education. We risk future generations losing the ability to manoeuvre through the world we aim to understand.
The simplification of complex ideas into seemingly objective truths goes to the heart of a very real problem within modern society. The current era is stricken by media-fuelled polarisation driven by the oversimplification of complex issues, and AI-driven reduction of complexity has very real potential to destabilise society even further. Academia should make efforts to emphasise the truth in paradox, and resist simplistic evaluations which fail to acknowledge multiple perspectives. We should also be aware that there may be some who stand to benefit from a simplified, polarised, unstable society, and who might look to erode the ability of future generations to think with nuance. Preparing to resist those with malignant intent would be wise.
The AI reinterpretation of academic literature represents more than just a loss of voice. As an English teacher, I know the combination of words on a page is more than a matter of style. The loss of positionality and reflexivity represents a deeper issue: a loss of soul. Our perspectives of the world are formed slowly, through years of experience. Our joys and pains echo in the words we write; they are reflections of a life lived. Machines don’t experience the world as we do. They can only replicate the perspectives we have lived to acquire. The stakes are high in the game of Telephone we play with future generations. Preserving the character of our voice is more than just a matter of academic integrity; it is a matter of human continuity.
Further Reading
European Parliamentary Research Service (EPRS) (2019). Polarisation and the Use of Technology in Political Campaigns. Brussels: EPRS. [Online]
Hovakimyan, G. & Bravo, J. M. (2024). Evolving Strategies in Machine Learning: A Systematic Review of Concept Drift Detection. Information, 15(12), 786. [Online]
Mitchell, M. (2020). Artificial Intelligence: A Guide for Thinking Humans. Penguin paperback ed. London: Penguin.
Simkin, M. & Roychowdhury, V. (2006). Do You Sincerely Want to Be Cited? Or: Read Before You Cite. Significance, 3(4), 179–181. [Online]
van Niekerk, J. et al. (2025). Addressing the Use of Generative AI in Academic Writing. Computers and Education: Artificial Intelligence, 8, 100342. [Online]