GPT-5.4 Changed Who Gets Cited in ChatGPT — And Most Brands Are Optimizing for the Wrong Model

Key Takeaways

  • GPT-5.4 (premium) cites brand websites 56% of the time. GPT-5.3 (free default) cites them just 8% of the time — a 7x gap.

  • The two models share only 7% of their citations on average. Optimizing for one gives you zero advantage on the other.

  • GPT-5.4 doesn't discover you through search — it already knows your brand from training data, then visits your site directly using site: operators.

  • 75% of GPT-5.4's cited domains don't appear in Google or Bing results for the same query. Google rank is largely irrelevant to premium-tier citation.

  • ChatGPT, Claude, and Gemini execute zero JavaScript. If your pricing page loads via client-side React, AI can't read it.

Quick context

What’s a GPT?

GPT stands for Generative Pre-trained Transformer — the type of AI model that powers ChatGPT. The number (5.3, 5.4) refers to the specific version of the model running under the hood.

GPT-5.4 vs ChatGPT

ChatGPT is the product. GPT-5.4 is the model inside it. It’s the same distinction as Google Chrome vs the V8 engine running it — one is what you use, one is what makes it work.

Why it matters here

ChatGPT runs different models depending on whether you’re on the free or paid tier. GPT-5.3 is the default. GPT-5.4 is the premium thinking model — and they search the web in completely different ways.


I came across an interesting new study this week — and it has some real ramifications for who is showing up in ChatGPT.

A Writesonic study analyzed 1,161 ChatGPT citations across 50 prompts and 119 conversations, and here’s what it found:

The ChatGPT model your customers use for purchasing decisions cites brand websites completely differently from the one most people open by default.

  • GPT-5.4 (the premium thinking model) cites brand websites 56% of the time.

  • GPT-5.3 (the default free model) cites them just 8% of the time.

It’s just a speed thing, right? Not exactly.

These two models clearly have some big differences in how they operate. It’s like two different citation universes running in parallel.

This is the two-model problem. If your GEO strategy doesn't account for it, you're probably invisible on the model that matters most for high-intent, purchase-stage research.

GEO Insight  ·  Citation Study, March 2026

GPT-5.4 cites brand websites 56% of the time. GPT-5.3, the free default model, cites them just 8% of the time — a 7x gap between models that run on the same search index, answering the same questions.

GPT-5.4  ·  Premium (thinking model) 56%
GPT-5.3  ·  Default (free model) 8%

What did the study on GPT-5.4 say about AI citations?

A March 2026 Writesonic study of 1,161 ChatGPT citations found that GPT-5.4 cites brand websites at 7x the rate of GPT-5.3, and the two models share only 7% of their citations on average.


The study — conducted by Samanyou Garg of Writesonic in March 2026 — looked at 119 real ChatGPT conversations, 50 unique prompts across 16 categories, and 7,896 web search results.

When it comes to AI search and citation research, this is one of the most rigorous first-party citation datasets published so far.

These three results stand out to me at first glance:

ChatGPT citation behavior by model  ·  1,161 citations analyzed

Model                                 Brand (1st-party)   3rd-party
GPT-5.2 Instant · Old default         22%                 78%
GPT-5.3 Instant · New default (free)  8%                  92%
GPT-5.4 Thinking · Premium            56%                 44%

The first finding: GPT-5.4 cites brand websites at seven times the rate of GPT-5.3.

The second finding is that GPT-5.3 is actually worse for brands than the old model it replaced.

GPT-5.2 cited brand sites 22% of the time. The new default dropped that to 8%. If your ChatGPT-driven traffic has quietly declined this year, this is part of why.

The third finding changes how you have to think about strategy entirely.

Across 50 prompts, the average citation overlap between GPT-5.3 and GPT-5.4 was just 7%. On 22 of those 50 prompts, the overlap was exactly zero — the two models cited completely different sources for the same question.

The results are pretty interesting — but does the broader SEO community agree?

The reaction backed it up.

SEO expert Chris Long called GPT-5.4's search behavior changes "monumental", and even Neil Patel started pushing the 56% brand citation stat to his audience, so you know it's rocking a few boats.

7% overlap

Average citation overlap between GPT-5.3 and GPT-5.4 for the same prompt. On 22 of 50 prompts, zero sources appeared in both models’ results.

GPT-5.3 vs GPT-5.4  ·  50 prompts tested  ·  22 prompts with 0% overlap

Source: Writesonic ChatGPT Citation Study, March 2026

Why does GPT-5.4 search differently from other models?

GPT-5.4 decomposes a single prompt into an average of 8.5 sub-queries and uses site: operators to query brand domains directly — a search behavior no previous ChatGPT model used at all.

GPT-5.3 sends roughly one broad query to the search index and surfaces the top results. GPT-5.4 does something fundamentally different.

It decomposes your prompt into 8.5 sub-queries on average — targeted, specific searches including direct site: operator queries against brand domains.

In essence, when you ask GPT-5.4 a question, it doesn't just run one search. It breaks your prompt into roughly 8–9 focused sub-searches — and some of those go straight to specific brand websites, the same way you'd type a company's URL directly into your browser.

Across just 50 prompts, GPT-5.4 sent 156 queries with site: operators — 37% of all its queries. No previous model used site: operators at all.

The pattern it follows is consistent:

Phase 1 — Brand verification

GPT-5.4 already knows which brands are relevant (from training data). It queries their domains directly — something like site:hubspot.com pricing Sales Hub 2026 — retrieving pricing, features, and product details from the source.

Phase 2 — Third-party validation

It then cross-references against G2, Capterra, and Shopify App Store reviews to validate what it found.
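The two-phase pattern above can be sketched as a simple query builder. This is a hypothetical illustration of the behavior the study describes, not OpenAI's actual implementation; the brand list, topic, and review sites are stand-ins.

```python
# Hypothetical sketch of GPT-5.4's two-phase research pattern,
# based on the behavior the Writesonic study describes.

def build_research_queries(topic, known_brands, review_sites):
    """Decompose one prompt into targeted sub-queries.

    known_brands maps brand name -> domain (the brands the model
    already "knows" from training data).
    """
    queries = []
    # Phase 1 - brand verification: query each brand domain directly
    for brand, domain in known_brands.items():
        queries.append(f"site:{domain} pricing {topic}")
        queries.append(f"site:{domain} features {topic}")
    # Phase 2 - third-party validation: cross-reference review platforms
    for site in review_sites:
        queries.append(f"site:{site} {topic} reviews")
    return queries

queries = build_research_queries(
    "CRM for B2B SaaS",
    {"HubSpot": "hubspot.com", "Salesforce": "salesforce.com"},
    ["g2.com", "capterra.com"],
)
print(queries[0])  # site:hubspot.com pricing CRM for B2B SaaS
```

One prompt fans out into several targeted searches, and the brand-site queries fire whether or not those sites rank anywhere in traditional search.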


What does this all mean? GPT-5.4 doesn't discover your brand through search. It already knows you exist from training data, and then comes directly to your site to read it.

Your brand signals — how widely you're mentioned across the web, how consistently your entity appears — determine whether GPT-5.4 knows you exist at all.

8.5x more queries

Average sub-queries fired per prompt by GPT-5.4, vs. 1.0 for GPT-5.3. GPT-5.4 also sent 156 site: operator queries across 50 prompts — a behavior no previous model used.

GPT-5.4 · Premium
8.5 queries
GPT-5.3 · Default
1.0 query
156 site: operator queries  ·  50 prompts tested  ·  no prior model used site: operators

Source: Writesonic ChatGPT Citation Study, March 2026

Does my Google ranking still matter for AI citation?

For GPT-5.3, Google rank still partially matters — 47% of its citations come from Google-ranked domains. For GPT-5.4, 75% of cited domains don't appear in Google or Bing results for the same query.

It depends on which model you're optimizing for — and this is the nuance most GEO advice misses. (If you're new to the GEO vs. SEO distinction, my post on SEO foundations is a good place to start.)

For GPT-5.3 (the default model most ChatGPT users see), traditional SEO still has pull. About 47% of its citations come from Google-ranked domains.

This is also the model where a small number of third-party publishers act as gatekeepers — Forbes, TechRadar, Tom's Guide, and Reddit collectively dominate its citation pool. If those outlets haven't covered you, you're largely invisible to GPT-5.3.

But for GPT-5.4, the picture inverts.

75% of the domains it cited didn't appear in Google or Bing results for the same user prompt. In specific tests — prompts about "Shopify vs WooCommerce" and "best marketing agencies" — zero of GPT-5.4's cited domains appeared in traditional search results at all.

What this means in practice

GPT-5.3 · Default

SEO + digital PR matters. Get covered by the gatekeepers.

GPT-5.4 · Premium

Training data presence + a readable, content-complete website matters.

This is also where the page-type data gets interesting.

GPT-5.4's citation behavior differs depending on what it reads. The shift away from blog posts toward commercial pages is one of the more interesting format changes the study shows — particularly for someone in the content business like me.

Citation share by page type  ·  GPT-5.3 vs GPT-5.4

Page type                 GPT-5.3 citations   GPT-5.4 citations
Blog / article pages      92 (32% share)      61 (8% share)
Homepage / root pages     42 (15% share)      161 (22% share)
Pricing pages             4 (1% share)        138 (19% share) · 35x ↑
Product / feature pages   13 (5% share)       73 (10% share)

Think of GPT-5.3 as a blog reader — it wants content that explains your category.

GPT-5.4 is a buyer researcher — it wants pricing, features, and comparisons. 51% of GPT-5.4's citations land on commercial pages.

The specific consequence for "Contact Sales" pricing pages: GPT-5.4 reads your pricing page, finds no actual numbers, and moves on to a competitor that publishes them.

What happens on comparison queries — and why that matters most

On comparison queries, GPT-5.3 cited zero brand websites across all prompts. GPT-5.4 cited brand websites 83–100% of the time on the same queries.

The comparison-prompt data really stuck out to me, partly because comparison queries are exactly where high-intent, purchase-stage research happens.

Think about the last time you researched a product and then compared it against other options.

When a user searches "Brand A vs Brand B vs Brand C":

  • GPT-5.3 cited zero brand websites across all comparison queries in the dataset

  • GPT-5.4 cited brand websites 83–100% of the time on the same prompts

Here’s what the study searched, and how the results looked:

Comparison query citations  ·  GPT-5.3 vs GPT-5.4

Prompt                             GPT-5.3 cites (3rd-party only)         GPT-5.4 cites (brand sites direct)
Best CRM for B2B SaaS              designrevision.com, techradar.com      hubspot.com, salesforce.com
QuickBooks vs Xero vs FreshBooks   gentlefrog.com, technologyadvice.com   freshbooks.com, quickbooks.com
Notion vs Obsidian vs Roam         xp-pen.com, medium.com                 obsidian.md, notion.so

If someone searches "Your Brand vs Competitor A vs Competitor B" in ChatGPT premium, GPT-5.4 goes directly to your comparison or pricing page.

This is huge: if that page doesn't exist, is unhelpful, or isn't readable by HTML-only crawlers, you lose that citation.

What you include on that page — real numbers, named features, structured comparisons — is what determines whether you show up.

What can ChatGPT read on your site?

ChatGPT, Claude, and Gemini execute zero JavaScript. Content that loads via client-side React or lazy-loading is completely invisible to all three. Your title tag is your most important metadata — not your meta description or JSON-LD.

When GPT-5.4 visits your pricing page via a site: query, you need to know what it can actually read once it gets there.

Thankfully, we’ve got a companion Writesonic crawler study (written March 31, 2026) that tested 62 AI crawler behaviors across six major LLMs. The findings are not pretty for most websites out there:

  • ChatGPT, Claude, and Gemini are HTML-only parsers. They execute zero JavaScript. If your pricing table or feature list loads via client-side React, these crawlers see a blank div — nothing else.

  • JSON-LD schema scored 0/6 across AI assistants. Your <title> tag is your most valuable metadata for AI — not your meta description or the structured data in your <head>.

  • CSS-hidden content IS readable — accordions, tabs, and collapsed FAQs are visible in raw HTML. AI reads the markup, not the rendered state.

  • Lazy-loaded content scored 0/6. Nothing that loads on scroll reaches any AI crawler.

  • No AI scrolls your page — ever.

The practical takeaway connects directly to how you structure your pages.

Every bit of key content a user should know when they look up a topic on ChatGPT — pricing, features, comparisons — needs to live in server-rendered static HTML body text, not in JavaScript components or CSS pseudo-elements.
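You can approximate the HTML-only behavior described above with a few lines of stdlib Python. This is a rough simulation, not the crawlers' actual parsers: it reads raw markup, never executes scripts, and ignores rendered CSS state. The page snippets are made up.

```python
# Rough simulation of what an HTML-only crawler "sees": raw markup
# text, with <script>/<style> bodies skipped and no JS execution.
from html.parser import HTMLParser

class RawTextExtractor(HTMLParser):
    """Collect text from raw HTML, skipping <script> and <style> bodies."""
    def __init__(self):
        super().__init__()
        self.skip = 0
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1
    def handle_data(self, data):
        if not self.skip and data.strip():
            self.chunks.append(data.strip())

def visible_to_ai(html: str) -> str:
    parser = RawTextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

# Client-side React: the crawler sees an empty div, never the pricing table.
react_page = '<div id="root"></div><script>renderPricing()</script>'
# Server-rendered: pricing is in the HTML body. A CSS-collapsed accordion
# is still present in the markup, so it is readable too.
ssr_page = ('<h2>Pricing</h2><p>Pro plan: $49/mo</p>'
            '<div class="accordion" style="display:none">Annual: $490/yr</div>')

assert "$49/mo" not in visible_to_ai(react_page)
assert "$49/mo" in visible_to_ai(ssr_page)
assert "$490/yr" in visible_to_ai(ssr_page)  # CSS-hidden, but in raw HTML
```

Running your own key pages through a check like this (fetch the raw HTML, don't render it) is a quick way to see whether your pricing and feature content survives without JavaScript.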

What should brands do about this? A two-track strategy

Because GPT-5.3 and GPT-5.4 cite completely different sources for completely different reasons, a GEO strategy in 2026 requires two parallel tracks — one for gatekeeper coverage, one for brand site readability.

The core insight from this data: you're not playing one game. You're playing two simultaneously, against different rules.

Track 1 — GPT-5.3 (the default; most ChatGPT users)

Goal: Get cited by the gatekeepers. GPT-5.3 sources 92% of its citations from third-party publishers. Your strategy here is essentially digital PR.

  • Pursue coverage in Forbes, TechRadar, Tom's Guide, and Reddit — these four domains dominate GPT-5.3's citation pool

  • Traditional SEO still matters here — 47% of GPT-5.3 citations come from Google-ranked domains

  • Build third-party review presence, contributed content, and earned media. The brand signals that earn AI citations are largely the same signals that earn editorial coverage.

Track 2 — GPT-5.4 (premium; purchasing decision queries)

Goal: Make your brand site citation-ready for when GPT-5.4 comes looking — because it will, if it knows you exist.

  • Fix your pricing page first. GPT-5.4 cited 138 pricing pages across 50 prompts. It reads for real numbers. Publish actual tiers, features, and comparisons.

  • Ensure server-side rendering — not client-side JS — for any page with pricing, product, or comparison content

  • Build your G2 and Capterra profiles — GPT-5.4 validates every brand against review platforms in Phase 2 of its research pattern

  • Expand your homepage and product/service pages — these are now the most-cited page types

  • Set up GA4 tracking for utm_source=chatgpt.com now. GPT-5.4 appends UTM parameters to ~87% of its citations. As adoption grows, this becomes a measurable traffic channel comparable to paid search.
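That last tracking step is easy to prototype outside GA4 as well. A minimal sketch, assuming landing-page URLs pulled from your own logs; the URLs below are made up:

```python
# Sketch: flag ChatGPT-referred landing URLs by their utm_source
# parameter, the way a GA4 segment on utm_source=chatgpt.com would.
from urllib.parse import urlparse, parse_qs

def is_chatgpt_referral(url: str) -> bool:
    params = parse_qs(urlparse(url).query)
    return params.get("utm_source", [""])[0] == "chatgpt.com"

hits = [
    "https://example.com/pricing?utm_source=chatgpt.com",
    "https://example.com/blog?utm_source=google",
    "https://example.com/pricing",
]
chatgpt_hits = [u for u in hits if is_chatgpt_referral(u)]
print(len(chatgpt_hits))  # 1
```

In GA4 itself, the equivalent is a segment or exploration filtered on session source = chatgpt.com; the point is to start collecting the baseline now.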


If you're not sure whether your current content is set up for either track, a content audit for AI citability is the fastest way to find out where the gaps are. And I can help with that!

Work with Brad

Your content should be seen — by AI
and by the people who are buying.

If this post made you wonder whether your site is set up for either model, that’s worth finding out. I help B2B and SaaS brands build content that gets cited, gets found, and sounds like them — not like everyone else’s AI output.

GEO Content Audits Brand Voice Strategy AI-Native Copywriting Content Strategy
Let’s talk  →

Frequently Asked Questions

  • What's the difference between GPT-5.3 and GPT-5.4 in ChatGPT?

    GPT-5.3 is the free default ChatGPT model, which cites brand websites only 8% of the time and relies primarily on third-party publishers like Forbes and TechRadar.

    GPT-5.4 is the premium thinking model, which cites brand websites 56% of the time using direct site: operator queries. According to a March 2026 Writesonic study, the two models share only 7% of citations on average for the same prompts.

  • Does my Google ranking still matter for AI citation?

    It depends on the model. For GPT-5.3, about 47% of its citations come from Google-ranked domains, so traditional SEO still has influence.

    For GPT-5.4, 75% of cited domains don't appear in Google or Bing results for the same query — it discovers brands through training data and visits them directly. The relationship between SEO and GEO is more nuanced now than most guides suggest.

  • Can ChatGPT read JavaScript-rendered content on my site?

    ChatGPT, Claude, and Gemini are HTML-only parsers — they execute zero JavaScript. If your site uses client-side React to render pricing tables or product content, AI crawlers see a blank page.

    The AI crawler study found that lazy-loaded content, JSON-LD schema, and CSS pseudo-elements are all invisible to these models.

    Content must be in static, server-rendered HTML body text to be readable.

  • What is a two-track GEO strategy?

    A two-track GEO strategy acknowledges that GPT-5.3 (free default) and GPT-5.4 (premium) cite completely different sources for different reasons.

    1. Track 1 focuses on earning coverage from third-party gatekeepers GPT-5.3 relies on.

    2. Track 2 focuses on making your own site citation-ready for GPT-5.4's direct brand queries. 

    Understanding what to include in your content for each model is where the strategy starts.

  • How do I track ChatGPT traffic to my site?

    ChatGPT appends utm_source=chatgpt.com to approximately 87% of GPT-5.4's citations.

    Set up a GA4 segment filtering for this UTM source now to establish a baseline. 

    If you're not sure your site is optimized to receive and convert that traffic, start with a GEO content audit to identify the gaps.

Brad Bartlett — Copywriter and Content Strategist based in Kansas City

Written by

Brad Bartlett

Brad is a copywriter and content strategist who helps creators, brands, and organizations build content that's actually worth reading — and built to be found. He specializes in conversion-focused copy, brand voice, and SEO and AI search optimization, with a straightforward philosophy: great content has to be authentic before it can perform. He works comfortably across the AI content space, helping clients use the tools without losing the voice. Fiverr Pro vetted, 4.9 stars out of 5 across 1,600+ clients.

Next

What Should I Include in My Content to Get Cited by AI?