Copy of a Copy: Information Fidelity and Dilution

By Ben Rossi
Enter AI—Enter Also Assumptions About AI
It seems the public thinks the biggest problem with AI is that “people will use less critical thinking.” That is a cute assessment, one you can only make if you still believe big tech companies are your friends. The truth is scarier: as a society, we are being conditioned to accept “truth” as the output of an algorithm, not as the cross-generational effort it has always been.
This shift fundamentally alters our relationship with information fidelity, or the degree to which a copy retains the precision of its source. Throughout history, humanity has battled to record information, a process akin to taking a photo where higher resolution equals deeper nuance. Ancient tools like stone tablets produced a "blurry" low-resolution image of reality, capturing a king's victory but missing the color of the soldiers' fear or the grain of the political landscape. As civilization advanced, so did our cameras. American historian of science James Gleick notes that precise writing, the printing press, and audio recording moved us from jagged sketches to high-definition archives (Gleick). We seemed to be approaching a point where we could preserve reality forever.
Then came Artificial Intelligence, introducing a strange reversal. Instead of increasing resolution, we built machines designed to lower it. AI functions not as a librarian but as a compression algorithm, chewing up complex information and spitting it out in bite-sized chunks. When an AI summarizes a political treaty or simplifies a philosophical text, it actively downscales the image, stripping away the specifics that make it true. We are not just losing information; we are automating the process of blurring it, trading high-fidelity truth for a convenient, low-resolution approximation. Bernard Marr and reports from IBM suggest this reliance on compression threatens our ability to perceive reality itself (Marr; IBM).
The “Telephone” Effect
The decay of information fidelity over time emerges as a natural law, perfectly illustrated by the children's game "Telephone." Communication theory calls this iterative transmission. According to engineers and mathematicians Claude Shannon and Warren Weaver, each retelling introduces human error, or "noise," that distorts the message (Shannon and Weaver). Misheard syllables and forgotten details accumulate as the original signal fades, turning a clear phrase into gibberish by the final iteration.
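To see how quickly noise compounds, here is a minimal Python sketch of iterative transmission, a toy model rather than anything drawn from Shannon and Weaver: a message passes through ten “retellings,” and each hop randomly corrupts a small fraction of its characters. The sample sentence, the 5% error rate, and the character-match fidelity score are all illustrative assumptions.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def noisy_hop(message, error_rate=0.05):
    """One 'retelling': each character is corrupted with probability error_rate."""
    return "".join(
        random.choice(ALPHABET) if random.random() < error_rate else ch
        for ch in message
    )

def fidelity(original, copy):
    """Fraction of characters that still match the original message."""
    return sum(a == b for a, b in zip(original, copy)) / len(original)

original = "the king won the battle but his soldiers were afraid"
copy = original
for hop in range(1, 11):
    copy = noisy_hop(copy)
    print(f"hop {hop:2d}: fidelity {fidelity(original, copy):.2f}  {copy}")
```

Run it and the fidelity score trends steadily downward: no hop can reliably recover a character once it has been scrambled, which is the one-way decay the playground game acts out.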
Whether through playground games or viral tweets, the "Telephone Effect" ensures that the farther a message travels from its source, the more it is reshaped by every mind it passes through. Modern AI runs this game on the scale of the entire internet. Trained on already-compressed text like headlines and summaries, AI recompresses that material further into new outputs that people then repost as "sources." Research by AI and security researcher Ilya Shumailov and his team indicates that when models are retrained on their own content, they suffer "model collapse," forgetting specific details and converging on bland averages (Shumailov et al.). The model eats its own tail, losing the original resolution.
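That convergence toward bland averages can also be sketched in a few lines of Python. This is not Shumailov et al.'s experiment, just a toy model under one assumed rule: each “generation” fits a normal distribution to the previous generation's data, samples a new corpus from that fit, and keeps only the typical samples within 1.5 standard deviations of the mean, the way a summarizer drops rare detail. The Gaussian stand-in for text, the cutoff, and the sample counts are illustrative assumptions.

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

def next_generation(corpus, keep_within=1.5, n_samples=2000):
    """Fit a normal distribution to the corpus, sample a new one from the fit,
    and keep only 'typical' samples within keep_within standard deviations,
    mimicking a summarizer that discards rare, tail-end detail."""
    mu = statistics.fmean(corpus)
    sigma = statistics.stdev(corpus)
    samples = [random.gauss(mu, sigma) for _ in range(n_samples)]
    return [x for x in samples if abs(x - mu) <= keep_within * sigma]

# Generation 0: "human" data with real spread (mean 0, standard deviation 1)
corpus = [random.gauss(0, 1) for _ in range(2000)]
for gen in range(8):
    print(f"gen {gen}: std {statistics.stdev(corpus):.3f}  "
          f"range [{min(corpus):+.2f}, {max(corpus):+.2f}]")
    corpus = next_generation(corpus)
```

In this toy setup the spread shrinks by roughly a quarter every generation, so within a handful of cycles the rare, low-frequency material is gone and only the bland center remains: a cartoon of the kind of degradation Shumailov and his team describe.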
The Epistemic Ouroboros
Viewed one answer at a time, this looks harmless. An AI takes a dense article and turns it into a clean bullet list "for busy readers." But every simplification discards the low-frequency details that do not fit the most statistically typical pattern. Over time, as more people rely on those compressed outputs instead of the source material, the ecosystem of available text becomes dominated by summaries of summaries. When a new model is trained on that environment, it is like taking a screenshot of a copy instead of photographing the original. The image may look sharp at a glance, but the entire texture of meaning has already been washed out (Marr; IBM). AI does not just participate in the Telephone Effect; it industrializes it.
Humans Complete the Loop
This process would be far less dangerous if humans treated AI outputs as the systematic dilution of information that they are. Instead, many readers encounter AI-generated text and assume it represents a direct human judgment or a faithful compression of expert consensus. They quote it in essays, repost it on social media, and use it as the basis for further questions, turning a thoughtless printout into a new "source." The Telephone chain now runs from model to human to model, with each hop adding interpretation, omission, and bias. The result is not just misinformation in the sense of "wrong facts," but a subtler flattening of nuance in general. Only what compresses well survives.
This has a corrosive effect on our sense of what it means for something to be "true." If an AI answer becomes the default way people first meet a complex idea, deeper research starts to feel excessive. We acclimate to low-resolution explanations and begin to prefer them, without ever knowing there was definition to be lost in the first place.
How Not to Get Eaten
Escaping the ouroboros-like nature of text-based AI does not require abandoning it, but it does require refusing to take AI output as truth. Findings on "lateral reading" by Stanford historian of education Sam Wineburg and digital literacy researcher Sarah McGrew demonstrate that even brief training in checking multiple independent sources before accepting an online claim dramatically improves people's ability to spot bad information (Wineburg and McGrew). Applied to AI, this means treating any generated answer as nothing more than a barbershop opinion: a quick overview that must be cross-checked against primary or expert sources before being believed. If you cannot find or do not bother to open those sources, you are choosing to stay inside the game of Telephone.
On a practical level, there are three simple habits that keep you from being absorbed into the feedback loop. First, always ask "according to whom?" If an AI cannot point you to primary sources, texts, authors, or data, treat its output as speculation, not knowledge. Second, resist endless summarization. If you find yourself asking for shorter and shorter versions of something you have never actually read, recognize that you are trading away resolution for convenience. Third, build the muscle of reconstruction. Take any concept you know well, restate the idea in your own words before comparing it to an AI explanation, and the comparison will quickly show you how little the machine actually understands. These moves allow you to take back agency in the chain rather than remaining a passive receiver. In a world where machines and humans are continuously whispering into each other's ears, the only way to preserve information fidelity is to notice the blur, wipe the lens of digital haze, and focus on the original image once again.
Works Cited
Gleick, James. The Information: A History, a Theory, a Flood. Pantheon, 2011.
IBM. "What Is Model Collapse?" IBM Think Blog, 13 Oct. 2024.
Marr, Bernard. "Why AI Models Are Collapsing And What It Means For The Future Of Technology." Forbes, 18 Aug. 2024.
Shannon, Claude E., and Warren Weaver. The Mathematical Theory of Communication. University of Illinois Press, 1949.
Shumailov, Ilya, et al. "AI Models Collapse When Trained on Recursively Generated Data." Nature, 23 July 2024.
Wineburg, Sam, and Sarah McGrew. "Lateral Reading and the Nature of Expertise: Reading Less and Learning More When Evaluating Digital Information." Teachers College Record, vol. 121, no. 11, 2019.
Ben Rossi is a freshman at CT State Tunxis.