
The internet died years ago, and you now interact with bots rather than humans, or so the theory claims (Image generated by Gemini)
The Dead Internet Theory has emerged as a provocative concept in recent years, suggesting that much of our online interactions may be with artificial entities rather than real humans. This theory proposes that the internet is increasingly dominated by bot activity and AI-generated content, with some claims that as much as half of all web traffic may be non-human.
The core premise suggests that the internet functionally “died” around 2016-2017, after which it became a simulation of human activity designed to manipulate users through algorithmic curation and automated content creation.
While some aspects of this theory reflect genuine technological developments, the comprehensive conspiracy elements lack substantial evidence. Here we examine the origins, components, evidence, and implications of the Dead Internet Theory for those who still hope to interact with real people online.
How did it all begin?
The Dead Internet Theory began gaining traction in niche internet communities before expanding to mainstream awareness.
According to available information, the theory’s precise origin is difficult to pinpoint, but a significant milestone occurred in 2021 when a user named “IlluminatiPirate” published a post titled “Dead Internet Theory: Most Of The Internet Is Fake” on the forum Agora Road’s Macintosh Cafe esoteric board[4].
This post claimed to build upon previous discussions from the same board and from Wizardchan, marking the beginning of the theory’s spread beyond these initial image boards.
The concept resonated with users who had observed peculiar patterns in online interactions and content that seemed suspiciously artificial or repetitive. Online forums like Reddit and 4chan became early incubators for these discussions, where users shared anecdotes about mechanical-seeming interactions and spam-like content that appeared to follow predictable patterns[3].
The theory gradually percolated through internet subcultures before reaching wider audiences through coverage on popular YouTube channels and mainstream media outlets.
A pivotal moment in its popularisation came with an article in The Atlantic titled “Maybe You Missed It, but the Internet ‘Died’ Five Years Ago,” which has been extensively referenced in subsequent discussions of the theory[4]. This article helped legitimise discussion of the concept, transitioning it from fringe conspiracy theory to a topic worthy of serious consideration in technological discourse.
The theory has since evolved into a broader conversation about online engagement and the growing role of automation in shaping our digital experiences, reflecting genuine concerns about the changing nature of the internet[3].
If the Dead Internet is really a theory, what are its core components?
It starts with the premise of machine-generated content
At its foundation, the Dead Internet Theory posits that the internet is no longer predominantly populated by genuine human-created content and interactions. Instead, proponents argue that most online content is now generated by machines, including social media posts, comments, articles, and even visual content such as images and videos[1].
This theory suggests that what was once a vibrant ecosystem of human discourse has degraded into a simulation, where algorithms and automated systems create the illusion of human activity while actually minimising organic human engagement.
The purported purpose behind this automation ranges from commercial motivations, such as advertising and algorithmic manipulation for profit, to more sinister governmental control mechanisms designed to influence public perception and behaviour[1].
The theory specifically identifies a transition period around 2016-2017 when this “death” allegedly occurred, marking a watershed moment in the internet’s evolution from human-dominated to machine-dominated[4].
The concept of a “dead” internet represents more than just the prevalence of automated content; it encompasses the notion that the internet’s original promise as a platform for genuine human connection and information exchange has been fundamentally compromised.
In this view, even when humans do create content or interact online, their experiences are so thoroughly mediated and manipulated by algorithms that authentic expression becomes nearly impossible[3].
The algorithmic filtering and curating of content creates echo chambers and bubbles that limit exposure to diverse perspectives, further contributing to the sense that the internet no longer serves its intended purpose of facilitating genuine human communication.
This component of the theory reflects legitimate concerns about how technology is reshaping our online experiences, even as it extends these concerns into more speculative territory[2].
Bots on the ground
A central element of the Dead Internet Theory is the pervasive presence of bots across digital platforms. These automated programs can generate content, engage with posts, and even conduct convincing conversations, creating the impression of human activity where none exists[1].
Proponents point to studies suggesting that nearly half of global internet traffic comes from bots, providing quantifiable evidence that non-human actors constitute a significant portion of online activity[3].
The 2016 US presidential election features prominently in discussions of bot influence, with both political sides accusing the other of using bots to spread misinformation and manipulate public opinion, marking a watershed moment in public awareness of bot activity[1].
The controversy surrounding Elon Musk’s acquisition of Twitter (now X) in 2022 further highlighted concerns about bot prevalence.
Musk temporarily halted his purchase, disputing Twitter’s claim that less than 5% of daily users were bots. His research teams estimated the actual figure at between 11% and 13.7%, with these bot accounts responsible for a disproportionate amount of content generation on the platform[1].
A frequently cited example of suspected bot activity is the “I Hate Texting” phenomenon on X, where numerous posts following the same format (“I hate texting, I just want to hold your hand” or “I hate texting, just come live with me”) rapidly accumulated tens of thousands of likes[1].
The uniformity of these posts, combined with their unusual virality, led many to question whether they originated from genuine human users or were part of a coordinated bot campaign, exemplifying the type of suspicious content patterns that fuel the Dead Internet Theory[1].
The role the other intelligence plays: artificial intelligence
The explosive advancement of artificial intelligence has significantly contributed to the growing belief in the Dead Internet Theory. Modern AI systems can now generate increasingly sophisticated content across multiple formats, including text, images, videos, and even interactive conversations that closely mimic human communication patterns[1].
Large language models like ChatGPT and Gemini have achieved benchmark results that in some cases surpass human performance, making AI-generated content increasingly difficult to distinguish from human-created material[1].
This technological evolution has created a situation where many users, particularly those with less technical knowledge, cannot reliably identify when they are interacting with AI-generated content versus human-created content[1].
The phenomenon of “Shrimp Jesus” and similar bizarre AI-generated images circulating on Facebook exemplifies this trend. These hyper-realistic images merging religious iconography with unexpected elements have garnered tens of thousands of likes and comments, demonstrating both the reach of AI-generated content and the engagement it can achieve[2].
Such content serves as “engagement bait,” designed to elicit reactions and shares, thereby boosting visibility of spam accounts or identifying users who might be susceptible to future scams or manipulation[1].
The sophistication of these AI systems continues to increase through deep learning techniques and continuous improvement, further blurring the line between authentic human expression and machine-generated simulation[1].
This growing indistinguishability between human and AI content forms a cornerstone of the Dead Internet Theory, as it suggests a future where the majority of online content we consume may have no human origin at all[3].
The trick algorithms play: using curation to create influence
The Dead Internet Theory emphasises the role of algorithms in fundamentally altering online experiences. Search engines, social media platforms, and content aggregators all employ sophisticated algorithms that determine what information users see, creating personalised but potentially distorted views of the digital world[1].
These algorithms are designed to maximise engagement and time spent on platforms, often prioritising content that provokes strong emotional responses regardless of its accuracy or value[3].
The theory suggests that these algorithmic systems are not merely sorting existing human-created content but increasingly working in tandem with automated content generation systems to create an entirely artificial online environment designed to manipulate user behaviour[4].
Algorithmic curation creates feedback loops where popular content becomes more visible, potentially amplifying artificial trends and bot-generated materials. This can lead to situations where genuine human voices become drowned out by the sheer volume of automated content, further reinforcing the perception of a “dead” internet[2].
The algorithmic promotion of content based on engagement metrics rather than quality or authenticity creates perverse incentives that reward controversial, extreme, or misleading content, distorting online discourse[1].
The coordinated interaction between content-generating AI, distribution algorithms, and engagement-measuring systems forms a self-reinforcing ecosystem that potentially marginalises authentic human participation, lending credence to the core claims of the Dead Internet Theory even as the more conspiratorial elements remain unproven[4].
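To make that feedback loop concrete, here is a minimal, purely illustrative Python sketch, not any platform’s actual ranking code: a feed that orders posts solely by accumulated engagement. The account mix, engagement numbers, and bot behaviour are invented assumptions; the point is only that coordinated automated engagement quickly pulls bot content into the visible portion of the feed.

```python
# Toy simulation of an engagement-only feed ranking loop (all numbers are
# made-up assumptions for illustration, not real platform behaviour).
import random

posts = (
    [{"id": f"human-{i}", "bot": False, "engagement": 0} for i in range(20)]
    + [{"id": f"bot-{i}", "bot": True, "engagement": 0} for i in range(5)]
)

def rank(feed):
    # Engagement-only ranking: no notion of quality or authenticity.
    return sorted(feed, key=lambda p: p["engagement"], reverse=True)

for _ in range(10):
    top = rank(posts)[:10]                 # only the top of the feed gets seen
    for post in top:
        post["engagement"] += random.randint(0, 3)   # organic human reactions
    for post in posts:
        if post["bot"]:
            post["engagement"] += 5        # coordinated bot likes arrive regardless

print([p["id"] for p in rank(posts)[:5]])  # bot posts dominate the visible feed
```

Even in this toy setup, ranking by raw engagement alone means the feed ends up showcasing whichever accounts can manufacture engagement fastest, which is exactly the dynamic the theory points to.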
Evidence: Strange AI-generated content phenomena
One of the most visible manifestations supporting the Dead Internet Theory is the proliferation of bizarre AI-generated content across social platforms. A prime example is the “Shrimp Jesus” phenomenon, where Facebook users encounter numerous AI-generated images combining crustaceans with traditional depictions of Jesus Christ[2]. These surreal images have accumulated substantial engagement, with some garnering over 20,000 likes and comments despite their obvious artificiality[2].
Similar trends include peculiar combinations of religious imagery with flight attendants or other incongruous elements, creating content that appears designed primarily to provoke reactions rather than communicate meaningful ideas[1]. These strange content patterns appear to exploit the human tendency to engage with unusual or provocative material, even when that material lacks coherent meaning or purpose[1].
The proliferation of these AI-generated images represents a new frontier in content creation, where the traditional limitations of human creativity and effort no longer apply.
AI systems can rapidly generate thousands of variations on a theme, testing different combinations to identify which elements trigger the highest engagement[2]. This mass-production approach to content creation fundamentally differs from traditional human expression, which typically involves intention, meaning, and contextualisation[3].
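As a hedged illustration of that mass-testing idea, not a description of any real operation, the sketch below uses a simple epsilon-greedy strategy to keep producing whichever content variant earns the most engagement. The variant names and engagement rates are entirely invented.

```python
# Illustrative sketch: an automated operation could A/B-test generated variants
# with a simple epsilon-greedy bandit. Variant names and "true" engagement
# rates below are hypothetical, for demonstration only.
import random

variants = {"shrimp-jesus": 0.08, "crab-cathedral": 0.05, "jet-madonna": 0.03}
stats = {v: {"shown": 0, "engaged": 0} for v in variants}

def pick(epsilon=0.1):
    # Mostly exploit the best-performing variant, occasionally explore others.
    if random.random() < epsilon or all(s["shown"] == 0 for s in stats.values()):
        return random.choice(list(variants))
    return max(stats, key=lambda v: stats[v]["engaged"] / max(stats[v]["shown"], 1))

for _ in range(5000):
    v = pick()
    stats[v]["shown"] += 1
    if random.random() < variants[v]:       # simulated user reaction
        stats[v]["engaged"] += 1

print(max(stats, key=lambda v: stats[v]["engaged"]))  # the variant the system would mass-produce
```

The mechanics are trivial; the scale is what matters. A human creator cannot iterate thousands of posts to find what the algorithm rewards, but an automated pipeline can.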
The success of this artificially generated content in attracting engagement supports the theory’s contention that meaningful human interaction is being replaced by automated processes designed to manipulate attention rather than facilitate genuine communication[2].
The fact that these obviously artificial images can achieve such widespread distribution and engagement suggests that algorithmic promotion may indeed prioritise automated content over authentic human expression, lending credibility to central claims of the Dead Internet Theory[1].
How much of your internet traffic is from bots and how do they engage?
Statistical evidence regarding bot activity provides some of the most compelling support for elements of the Dead Internet Theory.
Research indicates that nearly half of all global internet traffic may come from bots, suggesting non-human actors constitute a significant proportion of online activity[3]. This quantifiable data points to a digital landscape where human users represent only a fraction of total internet participants, aligning with the theory’s core premise[4].
During Elon Musk’s acquisition of Twitter, research teams estimated that bot accounts constituted between 11% and 13.7% of users but were responsible for a disproportionately large share of content creation, demonstrating how automated systems can dominate online discourse despite representing a minority of accounts[1].
Suspicious engagement patterns further reinforce concerns about non-human activity online. The “I Hate Texting” phenomenon on X showcased how posts following identical formats rapidly accumulated tens of thousands of likes, exhibiting virality patterns that appeared orchestrated rather than organic[1].
Similarly, the rapid spread of AI-generated images across Facebook displays engagement characteristics that seem artificially amplified, with identical content being shared across seemingly unrelated accounts simultaneously[1]. These coordinated patterns of content distribution and engagement create “artificial trends” that can manipulate algorithms, boosting visibility and creating the impression of popular interest where none may organically exist[2].
Such observations support the theory’s contention that much of what appears to be human activity online may actually represent sophisticated automation designed to manipulate metrics rather than reflect genuine human interest or participation[3].
How much of it is conspiracy theory and how much is technological reality?
While the Dead Internet Theory identifies legitimate technological trends, it’s crucial to separate these observable phenomena from more speculative conspiratorial elements. The increase in bot traffic, proliferation of AI-generated content, and algorithmic curation of online experiences are well-documented technological developments[4].
However, the theory extends beyond these verifiable facts to suggest a coordinated conspiracy, potentially involving government agencies or powerful corporations, deliberately orchestrating these changes to control populations and minimise authentic human activity[4]. This conspiratorial framing lacks substantial evidence and represents a significant leap beyond what current research supports[4].
The theory’s more extreme claims posit that the internet “died” around 2016-2017, after which human activity was supposedly overwhelmed by artificial systems[1].
While technological changes certainly accelerated during this period, with advances in AI and algorithmic systems, the timeline represents more of a gradual evolution than a sudden “death”[3]. The assertion that state actors or coordinated entities deliberately engineered this transition to manipulate public perception lacks compelling evidence, despite legitimate concerns about information manipulation online[4].
A more balanced assessment acknowledges that while technological systems increasingly mediate our online experiences, and while these systems can indeed be exploited for manipulation, the internet continues to host substantial genuine human activity alongside automated content[2].
What should you expect to see in the future?
The Dead Internet Theory, despite its more speculative elements, highlights important questions about the future of online interaction. As AI technologies continue to advance, the line between human and machine-generated content will likely become increasingly blurred[1]. This technological trajectory raises fundamental questions about authenticity, trust, and the nature of online communication[3].
The growing sophistication of generative AI could potentially create a digital environment where determining the origin of content—whether human or machine—becomes practically impossible for the average user[1]. This convergence may necessitate new approaches to digital literacy and content verification to help users navigate increasingly complex information ecosystems[2].
The proliferation of automated systems also raises concerns about the concentration of power in the hands of those who control these technologies[4]. As algorithms increasingly determine what content reaches users, the entities designing and deploying these systems wield tremendous influence over public discourse and information access[3].
This algorithmic gatekeeping could potentially undermine the internet’s original promise as a democratising force for information sharing and communication[3]. While the Dead Internet Theory may overstate the current dominance of automated systems, it correctly identifies a trend toward increased mediation of online experiences through technological systems that prioritise engagement metrics over authentic human connection[1]. Addressing these challenges will require thoughtful technological development, regulatory frameworks, and enhanced digital literacy to preserve meaningful human interaction in increasingly automated digital spaces[2].
So, what do you get on the internet?
The Dead Internet Theory reflects both legitimate technological developments and more speculative conspiratorial thinking. While evidence supports certain elements of the theory—including the rise of bot activity, the proliferation of AI-generated content, and the increasing influence of algorithmic curation—the comprehensive conspiracy narrative lacks substantial supporting evidence. The internet today does indeed feature significant non-human activity and automated content generation, but characterising it as “dead” overlooks the continuing presence of authentic human engagement and creativity online.
As we navigate an increasingly complex digital landscape, developing enhanced critical thinking skills and digital literacy becomes essential. The ability to distinguish between authentic human content and artificial materials, to recognise manipulation attempts, and to seek out genuine connection will define our online experiences moving forward. While the Dead Internet Theory may overstate the current dominance of automated systems, it correctly identifies trends that could fundamentally reshape our relationship with digital spaces. By acknowledging these technological developments while maintaining a critical perspective on more extreme claims, we can better understand and potentially influence the ongoing evolution of our shared online environment.
Sources:
[1] Dead Internet Theory: How Bots Haunt the Internet https://em360tech.com/tech-article/dead-internet-theory
[2] What is ‘the dead internet theory’ and why is it so sinister? https://www.sbs.com.au/news/article/what-is-the-dead-internet-theory-and-why-is-it-so-sinister/bsfvpw810
[3] What is the Dead Internet Theory? A Human Free Zone https://www.webopedia.com/technology/what-is-the-dead-internet-theory/
[4] Dead Internet theory – Wikipedia https://en.wikipedia.org/wiki/Dead_Internet_theory
[5] Echoes of the dead internet theory: AI’s silent takeover | Cybernews https://cybernews.com/editorial/dead-internet-theory-ai-silent-takeover/
[6] What Is the Dead Internet Theory? https://www.howtogeek.com/what-is-the-dead-internet-theory/
[7] Is the ‘Internet Death’ theory that the Internet is ruled by AI and bots true? https://gigazine.net/gsc_news/en/20240522-dead-internet-theory-ai-bot-web/
[8] The ‘dead internet theory’ makes eerie claims about an AI-run web. The truth is more sinister https://www.unsw.edu.au/newsroom/news/2024/05/-the-dead-internet-theory-makes-eerie-claims-about-an-ai-run-web-the-truth-is-more-sinister
[9] The dead internet theory: The rise of bot-to-bot interactions https://www.verdict.co.uk/bots-fake-news-internet-traffic/
[10] What is the dead internet theory? https://theweek.com/media/what-is-the-dead-internet-theory
[11] Is anyone out there? https://www.prospectmagazine.co.uk/ideas/technology/internet/67864/dead-internet-theory-ai
[12] The dead internet theory and the silent surge of bots https://www.wix.com/blog/dead-internet-theory
[13] What is Dead Internet Theory and Could it Change the Web? https://tech.co/news/what-is-dead-internet-theory
[14] What is the dead internet conspiracy? https://www.livescience.com/technology/artificial-intelligence/what-is-the-dead-internet-conspiracy
[15] Maybe You Missed It, but the Internet ‘Died’ Five Years Ago https://www.theatlantic.com/technology/archive/2021/08/dead-internet-theory-wrong-but-feels-true/619937/