Ask ChatGPT’s default and premium models the same question, and they’ll cite almost entirely different sources, according to a Writesonic analysis.
GPT-5.4 Thinking, ChatGPT’s premium model, sent 56% of its citations to brand websites. GPT-5.3 Instant, the default for all logged-in ChatGPT users, sent 8%.
Across all prompts, the two models shared only 7% of their cited sources. The reason comes down to how each model searches the web before answering.
Same Question, Different Search Strategy
When the models were asked about CRM software, GPT-5.3 sent one broad query and cited techradar.com and designrevision.com. GPT-5.4 sent separate queries restricted to hubspot.com, salesforce.com, and attio.com for pricing, then checked g2.com and capterra.com for reviews.
GPT-5.4 averaged 8.5 sub-queries, many of them restricted to specific domains, and used site: operators in 156 of its 423 total queries. No other ChatGPT model tested used site: operators at all.
OpenAI’s documentation says ChatGPT search rewrites prompts, but doesn’t explain how models decide which domains to target or when to use site: operators.
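To make the reported pattern concrete, here is a minimal sketch of how a single prompt can fan out into domain-restricted sub-queries using the site: operator. The function name and domain lists are illustrative, not Writesonic's or OpenAI's actual implementation.

```python
def build_subqueries(topic: str, pricing_domains: list[str],
                     review_domains: list[str]) -> list[str]:
    """Return one broad query plus site:-restricted variants (hypothetical)."""
    queries = [f"best {topic}"]  # the single broad query a default model might send
    # Restrict pricing lookups to vendor sites, as GPT-5.4 appeared to do
    queries += [f"{topic} pricing site:{d}" for d in pricing_domains]
    # Check third-party review aggregators with separate restricted queries
    queries += [f"{topic} reviews site:{d}" for d in review_domains]
    return queries

subqueries = build_subqueries(
    "CRM software",
    pricing_domains=["hubspot.com", "salesforce.com", "attio.com"],
    review_domains=["g2.com", "capterra.com"],
)
for q in subqueries:
    print(q)
```

Run against the CRM example above, this produces one broad query and five site:-restricted ones, roughly the shape of the 8.5-sub-query average the analysis describes.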
Where The Citations Land
GPT-5.3 leaned heavily on third-party content. Blog posts and articles made up 32% of its citations, with Forbes (15 citations), TechRadar (10), and Tom’s Guide (10) as the top domains.
GPT-5.4 went the other way. Brand homepages accounted for 22% of citations, pricing pages 19%, and product pages 10%.
GPT-5.3 cited four pricing pages across all 49 conversations that triggered web search. GPT-5.4 cited 138. For brands that gate pricing behind a “contact sales” page, this could mean GPT-5.4 has less to work with when answering comparison queries.
On head-to-head comparison prompts like “HubSpot vs Salesforce vs Pipedrive,” GPT-5.3 never cited a brand website. GPT-5.4 cited brands 83% to 100% of the time on those same prompts.
How This Connects To Search Rankings
Writesonic used SerpAPI to check whether cited domains also appeared in Google and Bing results for the same query.
For GPT-5.3, 47% of cited domains also appeared in Google results. The overlap suggests that Google rankings are at least partially predictive for the default model.
For GPT-5.4, 75% of cited domains didn’t appear in Google or Bing results for the same user prompt. That suggests GPT-5.4 may rely less on traditional search rankings and more on targeted domain queries, though that hasn’t been independently verified.
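The overlap check itself is straightforward once you have both lists of URLs. The sketch below assumes you've already collected the cited URLs from a ChatGPT answer and the result URLs from a SERP API for the same prompt (the fetching step is omitted; Writesonic used SerpAPI for it). It normalizes URLs to domains and computes the overlap fraction.

```python
from urllib.parse import urlparse

def domains(urls):
    """Normalize a list of URLs to bare domains (strips a leading www.)."""
    return {urlparse(u).netloc.removeprefix("www.") for u in urls}

def serp_overlap(cited_urls, serp_urls):
    """Fraction of cited domains that also appear in the SERP results."""
    cited, serp = domains(cited_urls), domains(serp_urls)
    return len(cited & serp) / len(cited) if cited else 0.0

# Illustrative data, not from the study
cited = ["https://www.techradar.com/best-crm", "https://hubspot.com/pricing"]
serp = ["https://www.techradar.com/best-crm", "https://forbes.com/crm"]
print(serp_overlap(cited, serp))  # 0.5: techradar.com overlaps, hubspot.com doesn't
```

Running this per prompt and averaging would yield figures comparable to the 47% and 75% numbers reported above.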
Why This Matters
Brand visibility in ChatGPT may depend on which model a user is running.
For the default model, third-party coverage on review sites and media outlets appears to drive citations. For the premium model, first-party content, particularly pricing and product pages, appears to matter more.
Looking Ahead
As ChatGPT continues rolling out new models, the patterns identified here may change.
Most cited URLs in the test sample included utm_source=chatgpt.com, giving brands a way to measure referral traffic directly in analytics.
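For teams working from raw logs rather than an analytics dashboard, a minimal sketch of that measurement might look like the following. The example URLs are hypothetical; it simply filters landing-page hits whose query string carries the utm_source=chatgpt.com tag.

```python
from urllib.parse import urlparse, parse_qs

def is_chatgpt_referral(url: str) -> bool:
    """True if the URL's query string tags the hit as a ChatGPT referral."""
    params = parse_qs(urlparse(url).query)
    return "chatgpt.com" in params.get("utm_source", [])

# Illustrative landing-page hits from a server log
hits = [
    "https://example.com/pricing?utm_source=chatgpt.com",
    "https://example.com/pricing?utm_source=google",
    "https://example.com/blog/post",
]
chatgpt_hits = [u for u in hits if is_chatgpt_referral(u)]
print(len(chatgpt_hits))  # 1
```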
