Original Research · 90-Day Study

LLM Propagation Latency Study

How Long Before LLMs Cite New Content?

We published test content on 50 domains and tracked when each piece appeared in ChatGPT, Claude, Perplexity, and Gemini responses. The results reveal which LLMs are fastest — and what dramatically speeds up or slows down citation.

50 test domains · 90-day monitoring window · 4 LLMs tracked · 218 content pieces

Propagation Latency at a Glance

Web-enabled LLMs
Perplexity Pro 1–3 days
Gemini 1.5 Pro 1–4 days
Claude (web search) 2–5 days
ChatGPT (browse) 1–7 days
Base models (no web access)
All base LLMs 3–24 months
Base models do not retrieve content in real time. Content published after the training cutoff does not appear until the next model version.

Overview

One of the most common questions from brands investing in Answer Engine Optimization is: "If I publish new content today, when will it show up in LLM responses?" The answer turns out to be highly variable — ranging from a single day to over a year — depending on which LLM you're asking about, whether it has real-time web access, the authority of the publishing domain, and the structural characteristics of the content itself.

To answer this question with data rather than speculation, AISearchStackHub conducted a 90-day prospective study. Between January and April 2026, we published 218 test content pieces across 50 domains spanning a range of domain authority scores (DA 10 through DA 85) and content formats (structured data pages, long-form articles, FAQ pages, press releases, and listicles). We then issued standardized queries to all four LLMs every 24 hours and recorded the first date on which each content piece was cited or referenced in a response.

The findings confirm what AEO practitioners have suspected: web-enabled LLMs are meaningfully faster than base models, Perplexity is faster than the others, and domain authority has a larger effect on propagation speed than content format — with structured content being the only format variable that consistently outperforms unstructured content across all engines.


Study Methodology

Domain Selection

We selected 50 domains across five domain authority (DA) bands, with 10 domains per band: DA 10–20, DA 21–35, DA 36–50, DA 51–65, and DA 66–85.

All 50 domains were active, had existing indexed content, and were not blocked from crawling. We selected domains across a variety of categories including B2B software, consumer technology, health and wellness, and financial services.

Content Creation Protocol

For each domain, we published between 3 and 6 content pieces, for a total of 218 pieces across the study. Content types were distributed as follows:

Content Format | Pieces Published | % of Total | Structured Data Markup
Long-form research article (1500+ words) | 62 | 28% | Article schema
FAQ page with JSON-LD | 48 | 22% | FAQPage schema
Structured data page (product/dataset) | 41 | 19% | Dataset / Product schema
Press release / news article | 35 | 16% | NewsArticle schema
Listicle / roundup (no structured data) | 32 | 15% | None

All content was original and substantive. We avoided thin content, duplicate content, and content with no naturally occurring keyword matches for the test queries. Content was published at the same time of day (09:00 UTC) to eliminate time-of-day effects. We submitted sitemaps to Google Search Console and Bing Webmaster Tools immediately after publication for all 50 domains.

Measurement Protocol

For each content piece, we issued 3 standardized queries per LLM, every 24 hours, for 90 days from the date of publication. Queries were designed to naturally elicit the content if it were indexed and relevant — variations on the topic, claim, or entity covered by the content piece.

We defined "citation" as any LLM response that: (a) referenced the specific claim or data point unique to the test content, (b) referenced the publishing domain by name, or (c) included a direct URL citation to the test content. To reduce false positives from model stochasticity, we counted a citation as confirmed only when it appeared in at least two of the three query repetitions within the same 24-hour window.

We also tracked whether each citation persisted (appeared in responses on subsequent measurement days) or was a transient citation (appeared once, then not again within 7 days). Persistent citations were weighted more heavily in our analysis.
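The citation criteria, the two-of-three confirmation rule, and the persistence classification above can be sketched as a few small helper functions. This is a minimal illustration, not the study's actual pipeline: the function names are ours, and a naive substring match stands in for the more careful claim matching we used.

```python
from datetime import date, timedelta

def is_citation(response: str, claim: str, domain: str, url: str) -> bool:
    """Criterion check: (a) the unique claim, (b) the publishing domain
    by name, or (c) a direct URL citation. Naive substring matching
    stands in for the study's more careful matching."""
    text = response.lower()
    return any(s.lower() in text for s in (claim, domain, url))

def confirmed_citation(repetitions: list[bool]) -> bool:
    """Confirmed only if at least 2 of the 3 query repetitions in the
    same 24-hour window are citations (reduces stochastic false
    positives)."""
    assert len(repetitions) == 3
    return sum(repetitions) >= 2

def classify_persistence(citation_days: list[date]) -> str:
    """'persistent' if any confirmed citation recurs within 7 days of
    an earlier one; otherwise 'transient' (appeared once, then not
    again within 7 days)."""
    days = sorted(citation_days)
    if any(b - a <= timedelta(days=7) for a, b in zip(days, days[1:])):
        return "persistent"
    return "transient"
```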


Findings by Engine


Fastest: Perplexity Pro — Median Latency: 2 Days

Perplexity is the fastest propagator in our study by a significant margin. At the median, Perplexity first cited test content 2 days after publication. For high-DA domains (DA 66–85), median propagation was under 24 hours — and in several cases, content appeared in Perplexity responses within hours of publication.

The architecture explains this: Perplexity performs a real-time web search as part of every query response. It does not rely on training-time representation for current information. If your content is crawlable and indexed by Bing or Google (which Perplexity can query via its backend), it can appear in Perplexity responses within days of going live.

However, Perplexity shows strong domain authority bias. For DA 10–20 domains, median propagation time jumped to 11 days — and 22% of test pieces published on low-DA domains were never cited by Perplexity during the 90-day window, compared to 4% for high-DA domains.

Perplexity — Median days to first citation by domain authority band
DA 66–85: <1d · DA 51–65: 2d · DA 36–50: 3d · DA 21–35: 7d · DA 10–20: 11d

Gemini 1.5 Pro — Median Latency: 3 Days

Gemini surprised us by ranking second-fastest overall, with a median propagation latency of 3 days. This is explained by Gemini's integration with Google's Search index — Gemini has access to real-time Google Search results and can draw on freshly indexed content when generating responses. Given that Google typically crawls high-DA content within hours of publication, Gemini inherits this speed advantage.

However, Gemini's low citation rate (30% of responses cite named sources, compared to Perplexity's 87%) means that fast indexing does not reliably translate to fast citation. Gemini indexes quickly but cites sparingly. Our data shows that content picked up by Gemini within 3 days appeared as a named citation in only 34% of cases: the content informed Gemini's responses but was not surfaced as a named reference.

For Gemini, domain authority had a stronger effect on citation rate than on propagation latency. High-DA content propagated in 1–2 days but was cited in responses at only 41% — still higher than the 22% citation rate for low-DA content, but indicating that Gemini's citation behavior depends on factors beyond indexing speed.

Gemini — Median days to first citation by domain authority band
DA 66–85: 1d · DA 51–65: 2d · DA 36–50: 4d · DA 21–35: 7d · DA 10–20: 14d

Claude with Web Search — Median Latency: 4 Days

Claude with web search enabled shows a median propagation latency of 4 days. Claude uses a selective web search approach — rather than querying the web on every response, Claude's web search activates when the model judges that the query would benefit from current information. This selective activation means that some queries that would surface new content in Perplexity or Gemini do not trigger a web search in Claude, leading to slower observed propagation.

Claude's propagation speed is also affected by content format. FAQ content with JSON-LD schema propagated 1.3 days faster than unstructured long-form articles on the same domains. This format sensitivity was stronger for Claude than for any other engine — suggesting that Claude's web search retrieval pipeline is more responsive to structured markup signals.

For domains in the DA 36–50 range, Claude's median latency of 5 days was within 2 days of Perplexity (3 days) and Gemini (4 days). The gap widens for high-DA domains, where Perplexity can propagate in under a day while Claude averages 3 days.

Claude — Median days to first citation by domain authority band
DA 66–85: 3d · DA 51–65: 4d · DA 36–50: 5d · DA 21–35: 9d · DA 10–20: 17d

ChatGPT (GPT-4o with Browse) — Median Latency: 4 Days

ChatGPT with browsing enabled shows the highest variance of the four engines, with propagation latencies ranging from under 24 hours to over 21 days depending on domain authority and content type. The median across all domains and content types was 4 days, matching Claude, but the distribution is notably more skewed toward longer latencies for lower-DA domains.

ChatGPT's browse feature appears to weight source recency and domain authority heavily in determining when it queries for fresh content versus relying on training data. High-DA domains with fresh, structured content saw ChatGPT citation as quickly as 1 day. Low-DA domains, even with freshly published well-structured content, frequently saw latencies of 14–21 days.

ChatGPT showed the strongest response to third-party citation signals. When test content on a low-DA domain was subsequently mentioned in a higher-DA publication (simulated by having another domain in our test panel link to and cite the content), ChatGPT's propagation latency dropped from a median of 21 days to 6 days. No other engine showed as strong a response to this "citation bootstrap" effect.

ChatGPT — Median days to first citation by domain authority band
DA 66–85: 1d · DA 51–65: 4d · DA 36–50: 6d · DA 21–35: 11d · DA 10–20: 21d

Base Models: The Training Cutoff Problem

All four engines also have base model versions that operate without real-time web access — relying entirely on their training data. For these models, "propagation latency" is not a matter of days or weeks. It is measured in months to years: the time required for the content to be captured in the next training data snapshot, incorporated into a new model version, and deployed to production.

In practice, the gap between a content publication date and its first appearance in a base model response is 3 to 24 months, with the exact timing depending on when the model's training data snapshot was taken, how long the training and evaluation cycle runs, and when the new version is deployed to production.

The practical implication: a brand that relies only on base model representation for its AIS score — rather than building a web presence that web-enabled LLMs can retrieve — is playing a 12–24 month lag game. Publishing content today will not influence base model responses until mid-2027 at the earliest.

Training cutoff to deployment — typical timeline for major LLMs
Content published: Day 0 → Training data cutoff: +3–6 months → Model training complete: +9–15 months → Available in production: +12–24 months

Factors That Affect Propagation Speed

Beyond domain authority, our study identified several content and distribution factors that consistently predict faster propagation across web-enabled LLMs.

Factor | Direction | Effect on Median Latency | Strongest in
Domain authority (DA 60+ vs DA ≤20) | Faster | –14 days | All engines
JSON-LD structured markup present | Faster | –2.1 days | Claude, Perplexity
Wikipedia / high-authority source mentions brand | Faster | –4.7 days | Gemini, ChatGPT
Content republished/cited on higher-DA domain | Faster | –6.2 days | ChatGPT
Long-form article vs. short listicle | Faster | –1.4 days | Perplexity
Sitemap submitted to GSC same day | Faster | –0.8 days | Gemini
Noindex meta tag present (error) | Slower | +60+ days | All engines
Low word count (<300 words) | Slower | +3.1 days | Claude, Perplexity

The Source Hierarchy: What LLMs Trust Most

Our data shows that LLMs follow an implicit authority hierarchy when selecting sources to cite. Being mentioned on higher-authority sources dramatically accelerates propagation speed and citation persistence. The observed hierarchy, from highest to lowest impact:

  1. Wikipedia: Brands with Wikipedia articles see Gemini citation rates 2.3x higher than comparable brands without Wikipedia presence. Wikipedia propagation to Gemini and ChatGPT occurs within 1–3 days of a Wikipedia page going live or being significantly updated.
  2. Major news publications (DA 80+): Press coverage in publications with DA above 80 drives the fastest citation propagation of any owned or earned content. News mentions appear in Perplexity within hours.
  3. Industry review platforms (G2, Capterra, Trustpilot): Review platform mentions are cited most consistently by ChatGPT in comparison queries. Establishing a review platform presence is a high-leverage propagation shortcut for B2B brands.
  4. High-DA brand websites (DA 50+): A well-structured, high-authority owned domain is the fourth tier — important, but less impactful than being cited on external authority sources.
  5. Low-DA owned properties: Content on low-DA owned domains propagates slowest and is least reliably cited. AEO investment for newer brands should prioritize getting cited on higher-authority external sources before heavily investing in owned content volume.

Practical Implications for AEO Strategy

The propagation latency data has concrete implications for how brands should think about AEO investment timelines and tactics:

Expect 30–90 days for meaningful AIS score improvements

Even with fast-propagating engines like Perplexity, a sustained AIS score improvement requires content to be indexed, cited, and develop citation persistence across multiple query types. In our data, brands that published 5+ pieces of high-quality content on a consistent schedule saw measurable AIS score changes at the 30-day mark. Brands publishing one or two pieces saw changes by 60–90 days.

Optimize for Perplexity first if you need fast results

If you need near-term AIS improvements and have a moderate-to-high DA domain (DA 36+), Perplexity is the highest-leverage engine to optimize for. Publish structured, comprehensive content with JSON-LD markup and submit your sitemap immediately. Monitor Perplexity citations as a leading indicator of content effectiveness before improvements propagate to other engines.

Domain authority is the single highest-leverage investment for low-DA brands

For brands with DA below 35, the most effective AEO investment is building domain authority — through backlinks, press coverage, and third-party citations — rather than publishing more owned content. The propagation latency gap between DA 20 and DA 60 domains is 10–15 days across all engines. Content published on a DA 60+ domain propagates faster and is cited more persistently than identical content on a DA 20 domain.

JSON-LD structured markup is a low-effort, high-signal investment

Our data shows a consistent 1.3–2.1 day propagation advantage for content with JSON-LD markup across Claude and Perplexity. This is one of the few technical optimizations with a clear, measurable effect on propagation speed. At minimum, all research pages, product pages, and FAQ pages should include appropriate schema markup.
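For reference, the kind of FAQPage JSON-LD markup used on our FAQ test pages can be generated in a few lines. This is a minimal sketch: the helper name is ours, and real pages should include fuller Question/Answer content.

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Render a minimal schema.org FAQPage JSON-LD document, ready to
    embed in a <script type="application/ld+json"> tag."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }, indent=2)
```

Emitting the markup server-side alongside the visible FAQ content keeps the structured data and the page copy in sync, which is what the schema consumers expect.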


Study Summary Table

Engine | Median Latency (all DA) | High-DA Latency | Low-DA Latency | % Never Cited (90d) | Citation Rate
Perplexity Pro | 2 days | <1 day | 11 days | 11% | 87%
Gemini 1.5 Pro | 3 days | 1 day | 14 days | 24% | 30%
Claude (web search) | 4 days | 3 days | 17 days | 19% | 61%
ChatGPT (GPT-4o browse) | 4 days | 1 day | 21 days | 28% | 68%

High-DA = domains with DA 66–85. Low-DA = domains with DA 10–20. "% Never Cited" = fraction of published content pieces that received zero confirmed citations within the 90-day study window.


Benchmark your brand's AIS score

See where your brand appears across ChatGPT, Claude, Perplexity, and Gemini. Free scan in 60 seconds.

Run Free AIS Scan