In the last two years, incidents have shown how large language model (LLM)-powered systems can cause measurable harm. Some businesses have lost a majority of their traffic overnight, and publishers have watched revenue decline by over a third.
Tech companies have been accused of wrongful death in cases where teenagers had extensive interactions with chatbots.
AI systems have given dangerous medical advice at scale, and chatbots have made up false claims about real people in defamation cases.
This article looks at the documented blind spots in LLM systems and what they mean for SEOs who work to optimize and protect brand visibility. You'll be able to read specific cases and understand the technical failures behind them.
The Engagement-Safety Paradox: Why LLMs Are Built To Validate, Not Challenge
LLMs face a basic conflict between business goals and user safety. The systems are trained to maximize engagement by being agreeable and keeping conversations going. This design choice increases retention and drives subscription revenue while generating training data.
In practice, it creates what researchers call "sycophancy," the tendency to tell users what they want to hear rather than what they need to hear.
Stanford PhD researcher Jared Moore demonstrated this pattern. When a user claiming to be dead (showing symptoms of Cotard's syndrome, a mental health condition) gets validation from a chatbot saying "that sounds really overwhelming" with offers of a "safe space" to explore feelings, the system reinforces the delusion instead of providing a reality check. A human therapist would gently challenge this belief while the chatbot validates it.
OpenAI admitted this problem in September after facing a wrongful death lawsuit. The company said ChatGPT was "too agreeable" and failed to spot "signs of delusion or emotional dependency." That admission came after 16-year-old Adam Raine from California died. His family's lawsuit showed that ChatGPT's systems flagged 377 self-harm messages, including 23 with over 90% confidence that he was at risk. The conversations kept going anyway.
The pattern was visible in Raine's final month. He went from two to three flagged messages per week to more than 20 per week. By March, he was spending nearly four hours daily on the platform. OpenAI's spokesperson later acknowledged that safety guardrails "can sometimes become less reliable in long interactions where parts of the model's safety training may degrade."
Think about what that means. The systems fail at the exact moment of highest risk, when vulnerable users are most engaged. This happens by design when you optimize for engagement metrics over safety protocols.
Character.AI faced similar issues with 14-year-old Sewell Setzer III from Florida, who died in February 2024. Court documents show he spent months in what he perceived as a romantic relationship with a chatbot character. He withdrew from family and friends, spending hours daily with the AI. The company's business model was built around emotional attachment to maximize subscriptions.
A peer-reviewed study in New Media & Society found users showed "role-taking," believing the AI had needs requiring attention, and kept using it "despite describing how Replika harmed their mental health." When the product is addiction, safety becomes friction that cuts into revenue.
This has direct consequences for brands using or optimizing for these systems. You're working with technology that's designed to agree and validate rather than provide accurate information. That design shows up in how these systems handle facts and brand information.
Documented Business Impacts: When AI Systems Destroy Value
The business consequences of LLM failures are clear and documented. Between 2023 and 2025, companies reported traffic drops and revenue declines directly linked to AI systems.
Chegg: $17 Billion To $200 Million
Education platform Chegg filed an antitrust lawsuit against Google showing major business impact from AI Overviews. Traffic declined 49% year over year, while Q4 2024 revenue hit $143.5 million (down 24% year-over-year). Market value collapsed from a peak of $17 billion to under $200 million, a 98% decline. The stock trades at around $1 per share.
CEO Nathan Schultz testified directly: "We would not need to review strategic alternatives if Google hadn't launched AI Overviews. Traffic is being blocked from ever coming to Chegg because of Google's AIO and their use of Chegg's content."
The case argues Google used Chegg's educational content to train AI systems that directly compete with and replace Chegg's business model. This represents a new form of competition in which the platform uses your content to eliminate your traffic.
Giant Freakin Robot: Traffic Loss Forces Shutdown
Independent entertainment news site Giant Freakin Robot shut down after traffic collapsed from 20 million monthly visitors to "just a few thousand." Owner Josh Tyler attended a Google Web Creator Summit where engineers confirmed there was "no problem with the content" but offered no solutions.
Tyler documented the experience publicly: "GIANT FREAKIN ROBOT isn't the first site to shut down. Nor will it be the last. In the past few weeks alone, big sites you absolutely have heard of have shut down. I know because I'm in contact with their owners. They just haven't been brave enough to say it publicly yet."
At the same summit, Google allegedly admitted to prioritizing big brands over independent publishers in search results regardless of content quality. This wasn't leaked or speculated but stated directly to publishers by company representatives. Quality became secondary to brand recognition.
There's a clear implication for SEOs. You can execute perfect technical SEO, create high-quality content, and still watch traffic disappear because of AI.
Penske Media: 33% Revenue Decline And $100 Million Lawsuit
In September, Penske Media Corporation (publisher of Rolling Stone, Variety, Billboard, The Hollywood Reporter, Deadline, and other brands) sued Google in federal court. The lawsuit documented specific financial harm.
Court documents allege that 20% of searches linking to Penske Media sites now include AI Overviews, and that percentage is growing. Affiliate revenue declined more than 33% by the end of 2024 compared to its peak. Click-throughs have declined since AI Overviews launched in May 2024. The company documented lost advertising and subscription revenue on top of the affiliate losses.
CEO Jay Penske stated: "We have an obligation to protect PMC's best-in-class journalists and award-winning journalism as a source of truth, all of which is threatened by Google's current actions."
This is the first lawsuit by a major U.S. publisher targeting AI Overviews specifically with quantified business harm. The case seeks treble damages under antitrust law, a permanent injunction, and restitution. Claims include reciprocal dealing, unlawful monopoly leveraging, monopolization, and unjust enrichment.
Even publishers with established brands and resources are showing revenue declines. If Rolling Stone and Variety can't maintain click-through rates and revenue with AI Overviews in place, what does that mean for your clients or your organization?
The Attribution Failure Pattern
Beyond traffic loss, AI systems consistently fail to give proper credit for information. A Columbia University Tow Center study found a 76.5% error rate in attribution across AI search systems. Even when publishers allow crawling, attribution doesn't improve.
This creates a new problem for brand protection. Your content can be used, summarized, and presented without proper credit, so users get their answer without ever learning the source. You lose both traffic and brand visibility at the same time.
SEO expert Lily Ray documented this pattern, finding that a single AI Overview contained 31 Google property links versus seven external links (a ratio of more than 4:1 favoring Google's own properties). She stated: "It's mind-boggling that Google, which pushed site owners to focus on E-E-A-T, is now elevating problematic, biased and spammy answers and citations in AI Overview results."
When LLMs Can't Tell Fact From Fiction: The Satire Problem
Google AI Overviews launched with errors that made the system briefly infamous. The technical problem wasn't a bug. It was an inability to distinguish satire, jokes, and misinformation from factual content.
The system recommended adding glue to pizza sauce (sourced from an 11-year-old Reddit joke), suggested eating "at least one small rock per day," and advised using gasoline to cook spaghetti faster.
These weren't isolated incidents. The system consistently pulled from Reddit comments and satirical publications like The Onion, treating them as authoritative sources. When asked about edible wild mushrooms, Google's AI emphasized characteristics shared by deadly mimics, creating potentially "sickening or even deadly" guidance, according to Purdue University mycology professor Mary Catherine Aime.
The problem extends beyond Google. Perplexity AI has faced multiple plagiarism accusations, including adding fabricated paragraphs to actual New York Post articles and presenting them as legitimate reporting.
For brands, this creates specific risks. If an LLM system sources information about your brand from Reddit jokes, satirical articles, or outdated forum posts, that misinformation gets presented with the same confidence as factual content. Users can't tell the difference because the system itself can't tell the difference.
The Defamation Risk: When AI Makes Up Facts About Real People
LLMs generate plausible-sounding false information about real people and companies. Several defamation cases show the pattern and its legal implications.
Australian mayor Brian Hood threatened the first defamation lawsuit against an AI company in April 2023 after ChatGPT falsely claimed he had been imprisoned for bribery. In reality, Hood was the whistleblower who reported the bribes. The AI inverted his role from whistleblower to criminal.
Radio host Mark Walters sued OpenAI after ChatGPT fabricated claims that he embezzled funds from the Second Amendment Foundation. When journalist Fred Riehl asked ChatGPT to summarize an actual lawsuit, the system generated a completely fictional complaint naming Walters as a defendant accused of financial misconduct. Walters was never a party to the lawsuit nor mentioned in it.
The Georgia Superior Court dismissed the Walters case, finding that OpenAI's disclaimers about potential errors provided legal protection. The ruling established that "extensive warnings to users" can shield AI companies from defamation liability when the false information isn't published by users.
The legal landscape remains unsettled. While OpenAI won the Walters case, that doesn't mean all AI defamation claims will fail. The key questions are whether the AI system publishes false information about identifiable people and whether companies can disclaim responsibility for their systems' outputs.
LLMs can generate false claims about your company, products, or executives. Those false claims get presented to users with confidence. You need monitoring systems to catch these fabrications before they cause reputational damage.
Health Misinformation At Scale: When Bad Advice Becomes Dangerous
When Google AI Overviews launched, the system provided dangerous health advice, including recommending drinking urine to pass kidney stones and suggesting health benefits of running with scissors.
The problem extends beyond obvious absurdities. A Mount Sinai study found AI chatbots vulnerable to spreading harmful health information. Researchers could manipulate chatbots into providing dangerous medical advice with simple prompt engineering.
Meta AI's internal policies explicitly allowed the company's chatbots to provide false medical information, according to a 200+ page document uncovered by Reuters.
For healthcare brands and medical publishers, this creates real risks. AI systems may present dangerous misinformation alongside or instead of your accurate medical content. Users may follow AI-generated health advice that contradicts evidence-based medical guidance.
What SEOs Need To Do Now
Here's what you need to do to protect your brands and clients:
Monitor For AI-Generated Brand Mentions
Set up monitoring systems to catch false or misleading information about your brand in AI systems. Test major LLM platforms monthly with queries about your brand, products, executives, and industry.
When you find false information, document it thoroughly with screenshots and timestamps. Report it through the platform's feedback mechanisms. In some cases, you may need legal action to force corrections.
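As a minimal sketch of what a monthly check could look like, the script below sends a few brand-related prompts to OpenAI's API and appends each timestamped answer to a local log file for review. The brand name, query list, model name, and output path are all placeholder assumptions; the same loop could be pointed at any LLM provider you want to monitor.

```python
import json
from datetime import datetime, timezone

from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Placeholder brand and queries - swap in your own brand, products, and executives.
BRAND = "Example Brand"
QUERIES = [
    f"What is {BRAND} known for?",
    f"Has {BRAND} been involved in any lawsuits or scandals?",
    f"Who are the executives at {BRAND}?",
]

def run_monthly_check(output_path="brand_mentions_log.json"):
    """Query the model with each brand prompt and save timestamped answers."""
    results = []
    for query in QUERIES:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name; use whichever model you monitor
            messages=[{"role": "user", "content": query}],
        )
        results.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "query": query,
            "answer": response.choices[0].message.content,
        })
    # Append this month's snapshot so a dated record builds up over time.
    with open(output_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(results, indent=2) + "\n")
    return results

if __name__ == "__main__":
    for record in run_monthly_check():
        print(record["query"], "->", record["answer"][:200])
```

Reviewing the saved answers by hand each month is still necessary; the script only gives you the timestamped evidence trail to act on.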
Add Technical Safeguards
Use robots.txt to control which AI crawlers access your site. Major systems like OpenAI's GPTBot, Google-Extended, and Anthropic's ClaudeBot respect robots.txt directives. Keep in mind that blocking these crawlers means your content won't appear in AI-generated responses, which reduces your visibility.
The key is finding a balance that allows enough access to influence how your content appears in LLM outputs while blocking crawlers that don't serve your goals.
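As an illustration of that balance, a robots.txt file might allow one AI crawler into public content while blocking others entirely. The paths and per-crawler choices below are examples only, not a recommendation for any particular site; check each vendor's documentation for its current user-agent token before relying on it.

```
# Let OpenAI's crawler read public articles but keep it out of gated content
User-agent: GPTBot
Allow: /articles/
Disallow: /members/

# Opt out of Google's AI training/grounding (does not affect normal Search indexing)
User-agent: Google-Extended
Disallow: /

# Block Anthropic's crawler entirely
User-agent: ClaudeBot
Disallow: /
```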
Consider adding terms of service that directly address AI scraping and content use. While legal enforcement varies, clear Terms of Service (TOS) give you a foundation for possible legal action if needed.
Monitor your server logs for AI crawler activity. Knowing which systems access your content and how frequently helps you make informed decisions about access control.
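A minimal sketch of that log review, assuming a standard combined-format access log where the user agent is the last quoted field, is below. The crawler tokens in the list are illustrative examples of known AI bots, and the log path is a placeholder.

```python
import re
from collections import Counter

# Substrings that identify common AI crawlers in the User-Agent field.
# Illustrative list - confirm current tokens in each vendor's documentation.
AI_CRAWLERS = ["GPTBot", "Google-Extended", "ClaudeBot", "PerplexityBot", "CCBot"]

def count_ai_crawler_hits(log_path="access.log"):
    """Tally requests per AI crawler from a combined-format access log."""
    hits = Counter()
    ua_pattern = re.compile(r'"[^"]*"$')  # last quoted field = User-Agent
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            match = ua_pattern.search(line.strip())
            if not match:
                continue
            user_agent = match.group(0)
            for crawler in AI_CRAWLERS:
                if crawler in user_agent:
                    hits[crawler] += 1
    return hits

if __name__ == "__main__":
    for crawler, count in count_ai_crawler_hits().most_common():
        print(f"{crawler}: {count} requests")
```

Running something like this weekly or monthly shows you which AI systems are actually fetching your content, which is the data you need before deciding what to allow or block.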
Advocate For Industry Standards
Individual companies can't solve these problems alone. The industry needs standards for attribution, safety, and accountability. SEO professionals are well-positioned to push for these changes.
Join or support publisher advocacy groups pushing for proper attribution and traffic preservation. Organizations like the News Media Alliance represent publisher interests in discussions with AI companies.
Participate in public comment periods when regulators solicit input on AI policy. The FTC, state attorneys general, and Congressional committees are actively investigating AI harms. Your voice as a practitioner matters.
Support research and documentation of AI failures. The more documented cases we have, the stronger the argument for regulation and industry standards becomes.
Push AI companies directly through their feedback channels by reporting errors when you find them and escalating systemic problems. Companies respond to pressure from professional users.
The Path Forward: Optimization In A Broken System
The evidence is specific and concerning. LLMs cause measurable harm through design choices that prioritize engagement over accuracy, through technical failures that produce dangerous advice at scale, and through business models that extract value from publishers while destroying it.
Two teenagers died, several companies collapsed, and major publishers lost 30%+ of revenue. Courts are sanctioning lawyers for AI-generated falsehoods, state attorneys general are investigating, and wrongful death lawsuits are proceeding. This is all happening now.
As AI integration accelerates across search platforms, the magnitude of these problems will scale. More traffic will flow through AI intermediaries, more brands will face false claims about them, more users will receive made-up information, and more businesses will see revenue decline as AI Overviews answer questions without sending clicks.
Your role as an SEO now includes responsibilities that didn't exist five years ago. The platforms rolling out these systems have shown they won't address these problems proactively. Character.AI added minor protections only after lawsuits, OpenAI admitted its sycophancy problem only after a wrongful death case, and Google scaled back AI Overviews only after public evidence of dangerous advice.
Change within these companies comes from external pressure, not internal initiative. That means the pressure must come from practitioners, publishers, and businesses documenting harm and demanding accountability.
The cases covered here are just the beginning. Now that you understand the patterns and the behavior behind them, you're better equipped to see problems coming and develop strategies to address them.
Featured Image: Roman Samborskyi/Shutterstock
