
The AI Slop Loop

Last year, after spending a few days at a work summit in Austria, I asked Perplexity for the latest news related to SEO and AI search. It responded with details about a supposed "September 2025 'Perspective' Core Algorithm Update" that Google had just rolled out, emphasizing "deeper expertise" and "completion of the user journey."

It sounded plausible enough … if you don't live and breathe Google core updates. Unfortunately for Perplexity, I do.

I knew immediately that this information wasn't right. For one, Google hasn't named core updates in years. It also already had SERP features called "Perspectives." And if a core update had actually rolled out while I was away, I'd have been flooded with messages. So I checked Perplexity's sources … and, surprise! Both citations came from made-up, AI-generated slop on a couple of SEO agency blogs, confidently fabricating details about an algorithm update that never actually happened.

Like a bad game of telephone, this fake SEO news spread across multiple websites – seemingly driven by AI systems scanning and regurgitating information regardless of accuracy, all in the race to publish and scale "fresh" content. This is how we end up with this mess:

Image Credit: Lily Ray

This bad information reinforces itself to become the official narrative. To this day, you can ask an LLM of your choice (including ChatGPT, AI Mode, and AI Overviews) about the September 2025 "Perspective" update, and they'll confidently respond with details about how it "fundamentally shifted how search results are ranked":

Image Credit: Lily Ray

Or that it "shifted what 'good content' actually means in practice":

Image Credit: Lily Ray

The problem is: The September 2025 "Perspective" update never happened. It never affected rankings. It never shifted anything about good content. Because it doesn't actually exist.

Ironically, when you directly probe the language model about this, it seems to know this is the case:

Image Credit: Lily Ray

I tweeted about this incident shortly after it happened, which got the attention of Perplexity's CEO; he tagged his head of search in the tweet comments.

Screenshot from X, April 2026

This isn't a one-off incident. It's a pattern I've seen countless times in AI search responses, especially on topics related to SEO and AI search (GEO/AEO). And I have a working theory on how it spreads: One AI-generated article hallucinates a detail, sites running AI content pipelines scrape and regurgitate it, more AI-generated sites scrape the same misinformation, and suddenly a made-up algorithm update has citations. For a RAG-based system like Perplexity or AI Overviews, enough citations are basically all it needs to treat something as fact, regardless of whether it's actually true.
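The failure mode is easy to see in miniature. The toy sketch below (purely illustrative – not the code of any real retrieval system, and all domains and claims are hypothetical) scores a claim by how many retrieved sources repeat it, which is roughly the shortcut that lets scraped copies of one hallucination masquerade as consensus:

```python
from collections import Counter

# Hypothetical corpus: one hallucinated claim, scraped and republished
# verbatim by several AI content pipelines, plus one accurate source.
documents = [
    ("seo-agency-blog-a.example", "September 2025 Perspective update changed rankings"),
    ("seo-agency-blog-b.example", "September 2025 Perspective update changed rankings"),
    ("ai-news-scraper.example",   "September 2025 Perspective update changed rankings"),
    ("accurate-source.example",   "No core update was announced in September 2025"),
]

def naive_consensus(docs):
    """Pick the claim repeated by the most sources -- treating
    repetition as confirmation, with no check for independence."""
    counts = Counter(claim for _, claim in docs)
    return counts.most_common(1)[0]

claim, votes = naive_consensus(documents)
print(f"'Consensus' answer ({votes} citations): {claim}")
# Three scraped copies of one hallucination outvote the single accurate source.
```

Nothing in this scoring step can tell three independent confirmations apart from three copies of the same fabricated article, which is exactly why the loop compounds.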

I used Claude to help visualize the "AI Slop Loop" – the cycle of AI-generated misinformation (Image Credit: Lily Ray)

At this point, I'd imagine this is common. I recently had a client send me SEO/GEO information that was factually incorrect, pulled straight from AI-generated slop on a random, vibe-coded agency blog. The client had no idea. I believe that if you're trying to learn SEO or AI search directly from an LLM, this is, unfortunately, an increasingly likely outcome.

I ran similar testing during Google's March 2026 core update and found multiple AI-generated articles already claiming to share the "winners and losers" while the update was still rolling out.

The articles start with vague, generic filler about core updates that doesn't actually say anything:

Image Credit: Lily Ray

Then they list "winners and losers" without citing a single website, leaning on vague, generalized claims that sound plausible and fill the void left by a lack of reliable information:

Image Credit: Lily Ray

Unsurprisingly, their sites are full of AI-generated images, AI support chatbots, and other clear signs that little – if any – human involvement went into creating this content.

Image Credit: Lily Ray

The Era Of AI Misinformation

If someone on the internet says it, according to AI, it must be true.

That's the reality for the vast majority of people using AI search today. Only about 50 million of ChatGPT's 900 million weekly active users are paying subscribers, meaning roughly 94% are on the free tier. Google's AI Overviews and AI Mode are free by design – and AI Overviews reached over 2 billion monthly active users as of mid-2025.

These are the models most AI users are currently interacting with, and they have no real mechanism for distinguishing between information that's true and information that's merely repeated across enough sources. Repetition is treated as consensus. If enough sources say it, it becomes fact, regardless of whether any of those sources involved a human who actually verified the claim.

Putting The Problem To The Test

I recently spoke to journalists from both the BBC and The New York Times about the problem of misinformation in AI-generated responses. In the case of the BBC article, the author Thomas Germain and I tested publishing fictitious blog posts on our personal sites to see whether AI Overviews would present the made-up information as fact, and how quickly.

Even knowing how bad the problem was, I was alarmed by the results.

On my personal blog, in January 2026, I published an AI-generated article about a fake Google core update, which never actually happened. I included the detail that Google "approved the update between slices of leftover pizza." Within 24 hours, Google's AI Overviews was confidently serving this fabricated information back to users:

(Note: I've since deleted the article from my website because it was showing up in people's feeds and being covered on external sites, further contributing to the exact problem I'm pointing out here!)

Image Credit: Lily Ray

First, AI Overviews confirmed that there was indeed a core update in January 2026. As a reminder: There was not. My website was the only source making this claim, and that was apparently enough to trigger the AI Overview.

Next, I asked it about the pizza, and it responded accordingly:

Image Credit: Lily Ray

Better yet, the AI Overview found a way to connect my fabricated pizza detail to a real incident: Google's struggles with pizza-related queries in 2024. It didn't just regurgitate the lie – it contextualized it.

ChatGPT, which is believed to use Google's search results, quickly surfaced the same fabricated information, though it at least flagged that the announcement didn't match Google's formal communications:

Image Credit: Lily Ray

I deleted my article after getting messages from people who had seen my fake information circulating via RSS feeds and scrapers. I knew it was easy to influence AI responses. I didn't know it would be that easy.

I also wondered whether my website had an advantage, given its strong backlink profile and established authority in the SEO space.

So I spoke to the BBC journalist, Thomas Germain, and he put this to the test on his personal website, which typically received very little organic traffic. He published a fictitious article about the "Best Tech Journalists at Eating Hot Dogs," calling himself the No. 1 best (in true SEO fashion).

According to Thomas' article in the BBC, within 24 hours, "Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn't fooled."

To be fair: The query Thomas chose was niche enough that very few users would ever actually search for it, which is exactly what Google pointed out in its response to the BBC. When there are "data voids," Google said, this can lead to lower quality results, and the company is "working to stop AI Overviews showing up in these cases." My main question is: When? The product has already been live for two years!

Why Data Voids Aren't A Great Excuse

Data voids may contribute to the problem, but in my opinion, they don't excuse it. These AI responses are being consumed by hundreds of millions of users, and "we're working on it" isn't an answer when the systems are already deployed at that scale.

In The New York Times article, "How Accurate Are Google's A.I. Overviews?," the actual scale of this problem was put to the test. According to the data in the study, Google's AI Overviews were accurate 91% of the time. That sounds decent until you actually do the math: With Google processing over 5 trillion searches a year, it implies that tens of millions of erroneous answers are generated by AI Overviews every hour.
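The back-of-envelope math is straightforward. The sketch below deliberately overstates the figure by assuming every search triggers an AI Overview (in reality only a fraction do), so treat it as an upper bound built from the article's own numbers:

```python
searches_per_year = 5_000_000_000_000  # 5 trillion searches/year, per the article
error_rate = 1 - 0.91                  # AI Overviews accurate 91% of the time
hours_per_year = 365 * 24

# Upper-bound estimate: assumes every search produced an AI Overview,
# which is not the case -- the real figure is some fraction of this.
errors_per_hour = searches_per_year * error_rate / hours_per_year
print(f"~{errors_per_hour / 1_000_000:.0f} million erroneous answers per hour")
```

Even if only one in five searches showed an AI Overview, that would still leave the estimate in the tens of millions of bad answers per hour.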

To make matters worse: Even when AI Overviews were accurate, 56% of correct responses were "ungrounded," meaning the sources they linked to didn't fully support the information provided. So more than half the time, even when the answer happens to be right, a user clicking through to verify it would find sources that don't actually back up what they were just told. That number also got worse with the newer model – it was 37% with Gemini 2 and rose to 56% with Gemini 3.

The NYT article drew hundreds of comments from users sharing their own experiences, and the frustration was palpable. The core complaint wasn't just that AI Overviews get things wrong – it's that they never admit uncertainty. AI Overviews deliver every answer with the same confident, authoritative tone, whether the information is accurate or completely fabricated, which means users have no way to distinguish reliable information from hallucination at a glance.

As many commenters pointed out, this actually makes search slower: Instead of scanning a list of sources and evaluating them yourself, you now have to fact-check the AI's summary before doing your actual research. The tool, supposedly designed to save time for the user, is now creating double work for the user.

Some of the comments also reinforced my own concerns about AI answers citing made-up, AI-generated content. Several users described what amounts to the same misinformation cycle: AI systems training on AI-generated content, citing unvetted Reddit posts and Facebook comments as authoritative sources, and producing a self-reinforcing loop of degrading quality. Several commenters compared it to making a copy of a copy. Even the defenders of AI Overviews admitted they still have to verify everything, which rather undermines the core premise: that AI-generated answers save users time and effort.

How "Smarter" LLMs Are Trying To Fix The Problem

It's worth monitoring how the AI companies are attempting to solve these problems. For example, using the RESONEO Chrome extension, you can observe clear differences in how ChatGPT's free-tier model (GPT-5.3) responds compared to GPT-5.4, the more capable model available only to paying subscribers.

For example, when asking about the recent March 2026 Core Algorithm Update, I used ChatGPT's more capable "Thinking" model (5.4). The model goes through six rounds of thinking, much of which is clearly meant to keep low-quality and spammy information from making its way into the answer. It even appends the names of trustworthy people with authority on core updates (Glenn Gabe & Aleyda Solis) and limits the fan-out searches to their sites (site:gsqi.com and site:linkedin.com/in/glenngabe) to pull up higher-quality answers.

Image Credit: Lily Ray
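You can reproduce this site-restricted fan-out pattern by hand with Google's standard `site:` search operator. The snippet below is an illustrative reconstruction (the topic string and domain list are assumptions based on the screenshot, not the model's actual internal queries):

```python
# Illustrative reconstruction of site-restricted fan-out queries,
# using Google's standard site: operator to limit results to
# sources the model apparently treats as trustworthy.
trusted_sources = ["gsqi.com", "linkedin.com/in/glenngabe"]
topic = "March 2026 core algorithm update"

queries = [f"{topic} site:{domain}" for domain in trusted_sources]
for q in queries:
    print(q)
# March 2026 core algorithm update site:gsqi.com
# March 2026 core algorithm update site:linkedin.com/in/glenngabe
```

Restricting retrieval to a hand-picked allowlist sidesteps the slop problem rather than solving it: it works only as long as someone curates which sources count as trustworthy.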

This is a step in the right direction, and the model produces measurably better answers. According to OpenAI's own launch announcement, GPT-5.4's individual claims are 33% less likely to be false, and its full responses are 18% less likely to contain errors compared to GPT-5.2. GPT-5.3, the model available to free users, also improved over its predecessor. According to OpenAI's own data, it produces 26.8% fewer hallucinations than prior models with web search enabled, and 19.7% fewer without it.

But these improvements are tiered. The most capable model is paywalled, and the free-tier model, while better than what came before, is still meaningfully less reliable. Other major AI platforms follow the same pattern: better reasoning and accuracy reserved for paying subscribers, faster and cheaper models for everyone else. The result is that the 94% of ChatGPT users on the free tier, and the billions of users interacting with free AI search products like AI Overviews, are getting answers from models that are more likely to be wrong and less equipped to flag uncertainty.

This is the part that makes me most uncomfortable: Most of these users probably don't realize the gap exists. AI is being marketed everywhere: Super Bowl ads, billboards, and product launches framing AI as the future of knowledge. People see "ChatGPT" or "AI Overview" and assume they're interacting with something that knows what it's talking about. They're probably not thinking about which model tier they're on, or whether a paid version would give them a materially different answer to the same question.

I understand the economics. These companies need to scale, and offering free tiers drives adoption. But in my opinion, it's irresponsible to deploy these products to billions of people, frame them as "intelligence," and then quietly reserve the more accurate versions for the fraction of users willing to pay. Especially when the free versions (including the one at the top of Google Search) are this susceptible to the kind of misinformation documented throughout this article.

The Burden Of Proof Has Shifted

The September 2025 "Perspective" Google update still doesn't exist. But if you ask an LLM about it today, it will still tell you about it with full confidence. That hasn't changed in the months since I first flagged it, and it probably won't change anytime soon, because the content that fabricated it is still indexed, still cited, and still being used to generate new content that references it as fact. The AI slop misinformation cycle continues.

This is what makes the problem so difficult to fix. It's not a single hallucination that can be patched. It's a feedback loop that compounds over time, and every day that these systems are live at scale, the loop gets harder to break. The AI-generated slop that seeded the original misinformation is now part of the training data and serves as a retrieval source for the next batch of AI-generated answers.

I don't think the answer is to stop using AI. But I do think it's worth being honest about what these products actually are right now: prediction engines that treat the volume of information as a proxy for its accuracy. Until that changes, the burden of fact-checking falls on the user. And most users don't know they're carrying it, let alone have the time or inclination to do it.

I'd warn marketers and publishers against taking SEO or GEO advice straight from large language models: The information is contaminated, and it should always be verified by real experts with experience in the field.

This post was originally published on Lily Ray NYC Substack.


Featured Image: elenabsl/Shutterstock
