LinkedIn published findings from its internal testing on what drives visibility in AI-generated search results.
The company, reportedly among the most-cited sources in AI responses, shared what worked for improving its presence in LLMs and AI Overviews. For practitioners adjusting to AI search, this is a rare look at what a heavily cited source actually tested and measured.
In a blog post, Inna Meklin, Director of Digital Marketing at LinkedIn, and Cassie Dell, Group Manager, Organic Growth at LinkedIn, detailed the tactics that got results.
Content Structure And Markup
LinkedIn found that how you organize content affects whether LLMs can extract and surface it. The authors wrote that headings and information hierarchy matter because "the more structured and logical your content is, the easier it is for LLMs to understand and surface."
Semantic HTML markup also played a role, with clear structure helping LLMs interpret what each section is for. The authors called this "AI readability."
The takeaway is that content structure isn't just a UX consideration anymore. Proper heading hierarchy and clean markup may affect whether your content gets cited.
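LinkedIn didn't publish tooling for checking this, but the heading-hierarchy point is easy to audit yourself. Here's a minimal sketch (my own, not LinkedIn's) using Python's standard-library `html.parser` to flag skipped heading levels, one of the most common structural problems:

```python
from html.parser import HTMLParser

class HeadingAudit(HTMLParser):
    """Collects h1-h6 tags in document order and flags skipped levels."""
    def __init__(self):
        super().__init__()
        self.levels = []   # heading levels in the order they appear
        self.issues = []

    def handle_starttag(self, tag, attrs):
        # Match h1 through h6 only
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if self.levels and level > self.levels[-1] + 1:
                self.issues.append(
                    f"h{self.levels[-1]} followed by h{level} (skipped a level)"
                )
            self.levels.append(level)

def audit_headings(html: str):
    parser = HeadingAudit()
    parser.feed(html)
    return parser.levels, parser.issues

page = "<h1>Guide</h1><h2>Setup</h2><h4>Details</h4>"
levels, issues = audit_headings(page)
print(levels)   # [1, 2, 4]
print(issues)   # ['h2 followed by h4 (skipped a level)']
```

A clean, sequential heading outline is exactly the "structured and logical" hierarchy LinkedIn describes, and it's cheap to verify in a crawl.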
Expert Authorship And Timestamps
LinkedIn's testing also pointed to credibility signals. The authors wrote:
"LLMs favor content that signals credibility and relevance, authored by real experts, clearly time-stamped, and written in a conversational, insight-driven style."
Named authors with visible credentials and clear publication dates appeared to perform better in LinkedIn's testing than anonymous or undated content.
The Measurement Change
LinkedIn added new KPIs alongside traffic for awareness-stage content, tracking citation share, visibility rate, and LLM mentions using AI visibility software. The company also said it's developing a new traffic source in its internal analytics specifically for LLM-driven visits, and monitoring LLM bot behavior in CMS logs.
The authors acknowledged the measurement challenge:
"We simply couldn't quantify how visibility within LLM responses impacts the bottom line."
For teams still reporting traffic as the primary SEO metric, there's a gap here. If non-brand informational content is increasingly consumed within AI answers rather than on your site, traffic may undercount your actual reach.
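The post doesn't describe how LinkedIn inspects its logs, but a basic version of LLM bot monitoring is straightforward: tally requests whose user-agent string matches a known AI crawler token. GPTBot, OAI-SearchBot, PerplexityBot, and ClaudeBot are all published crawler user agents; the log format below is an assumption for illustration:

```python
from collections import Counter

# Published user-agent tokens for common AI crawlers; extend as needed.
LLM_BOTS = ("GPTBot", "OAI-SearchBot", "PerplexityBot", "ClaudeBot")

def count_llm_hits(log_lines):
    """Tally requests per AI crawler from raw access-log lines
    (assumes the user-agent string appears somewhere in each line)."""
    hits = Counter()
    for line in log_lines:
        for bot in LLM_BOTS:
            if bot in line:
                hits[bot] += 1
                break
    return hits

sample = [
    '66.249.0.1 "GET /guide HTTP/1.1" 200 "Mozilla/5.0 (compatible; GPTBot/1.2)"',
    '10.0.0.2 "GET /guide HTTP/1.1" 200 "Mozilla/5.0 (compatible; PerplexityBot/1.0)"',
    '10.0.0.3 "GET /guide HTTP/1.1" 200 "Mozilla/5.0 (real browser)"',
]
print(count_llm_hits(sample))
```

Substring matching on user agents is spoofable, so serious monitoring would also verify crawler IP ranges, but even this rough count shows whether AI crawlers are actually fetching your pages.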
Why This Matters
What caught my attention is how much this overlaps with what AI platforms themselves are saying.
SEJ's Roger Montti recently interviewed Jesse Dwyer from Perplexity about what drives AI search visibility. Dwyer explained that Perplexity retrieves content at the sub-document level, pulling granular fragments rather than reasoning over full pages. That means how you structure content affects whether it gets extracted at all.
LinkedIn's findings point in the same direction from the publisher side. Structure and markup matter because LLMs parse content in fragments. The credibility signals LinkedIn identified, like expert authorship and timestamps, appear to affect which fragments get surfaced.
When a heavily cited source and an AI search platform land on the same conclusions independently, you have something to work with beyond speculation.
Looking Ahead
The authors are adopting a different mindset that practitioners can learn from:
"We're moving away from 'search, click, website' thinking toward a new model: Be seen, be mentioned, be considered, be chosen."
LinkedIn indicated Part 3 of the series will include a guide on optimizing owned content for AI search, covering answer blocks and explicit definitions.
