Propagation Latency at a Glance
Overview
One of the most common questions from brands investing in Answer Engine Optimization is: "If I publish new content today, when will it show up in LLM responses?" The answer turns out to be highly variable — ranging from a single day to over a year — depending on which LLM you're asking about, whether it has real-time web access, the authority of the publishing domain, and the structural characteristics of the content itself.
To answer this question with data rather than speculation, AISearchStackHub conducted a 90-day prospective study. Between January and April 2026, we published 218 test content pieces across 50 domains spanning a range of domain authority scores (DA 10 through DA 85) and content formats (structured data pages, long-form articles, FAQ pages, press releases, and listicles). We then issued standardized queries to all four LLMs every 24 hours and recorded the first date on which each content piece was cited or referenced in a response.
The findings confirm what AEO practitioners have suspected: web-enabled LLMs are meaningfully faster than base models, Perplexity is faster than the others, and domain authority has a larger effect on propagation speed than content format — with structured content being the only format variable that consistently outperforms unstructured content across all engines.
Study Methodology
Domain Selection
We selected 50 domains across five domain authority (DA) bands, with 10 domains per band:
- DA 10–20: New or low-authority domains (startup websites, recently registered domains)
- DA 21–35: Emerging authority (small brand blogs, niche publications)
- DA 36–50: Moderate authority (established brand sites, regional news)
- DA 51–65: High authority (mid-tier publications, established SaaS brands)
- DA 66–85: Very high authority (major publications, well-known technology brands)
All 50 domains were active, had existing indexed content, and were not blocked from crawling. We selected domains across a variety of categories including B2B software, consumer technology, health and wellness, and financial services.
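For reference, the five bands above can be expressed as a small lookup table. The boundaries and labels are the study's; the helper function itself is illustrative.

```python
# The study's five DA bands as a lookup table. Boundaries and labels
# mirror the list above; the helper function is illustrative.

DA_BANDS = [
    (10, 20, "new or low authority"),
    (21, 35, "emerging authority"),
    (36, 50, "moderate authority"),
    (51, 65, "high authority"),
    (66, 85, "very high authority"),
]

def da_band(da: int) -> str:
    """Return the study's band label for a domain authority score."""
    for lo, hi, label in DA_BANDS:
        if lo <= da <= hi:
            return label
    raise ValueError(f"DA {da} is outside the study's 10-85 range")
```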
Content Creation Protocol
For each domain, we published between 3 and 6 content pieces, for a total of 218 pieces across the study. Content types were distributed as follows:
| Content Format | Pieces Published | % of Total | Structured Data Markup |
|---|---|---|---|
| Long-form research article (1500+ words) | 62 | 28% | Article schema |
| FAQ page with JSON-LD | 48 | 22% | FAQPage schema |
| Structured data page (product/dataset) | 41 | 19% | Dataset / Product schema |
| Press release / news article | 35 | 16% | NewsArticle schema |
| Listicle / roundup (no structured data) | 32 | 15% | None |
All content was original and substantive. We avoided thin content, duplicate content, and content with no naturally occurring keyword matches for the test queries. Content was published at the same time of day (09:00 UTC) to eliminate time-of-day effects. We submitted sitemaps to Google Search Console and Bing Webmaster Tools immediately after publication for all 50 domains.
Measurement Protocol
For each content piece, we issued 3 standardized queries per LLM, every 24 hours, for 90 days from the date of publication. Queries were designed to naturally elicit the content if it were indexed and relevant — variations on the topic, claim, or entity covered by the content piece.
We defined a "citation" as any LLM response that: (a) referenced the specific claim or data point unique to the test content, (b) referenced the publishing domain by name, or (c) included a direct URL citation to the test content. To reduce false positives from model stochasticity, a citation had to appear in at least two of the three query repetitions within the same 24-hour window to count as confirmed.
We also tracked whether each citation persisted (appeared in responses on subsequent measurement days) or was a transient citation (appeared once, then not again within 7 days). Persistent citations were weighted more heavily in our analysis.
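The confirmation and persistence rules above can be sketched in code. This is a hypothetical reconstruction: the thresholds (2-of-3 repetitions, 7-day persistence window) come from the protocol, but the data shapes and function names are our own.

```python
from datetime import date

# Sketch of the study's citation rules: a citation is "confirmed" when
# at least 2 of the 3 daily query repetitions cite the content, and
# "persistent" when another confirmed citation appears within 7 days
# of the first. Data shapes and names are illustrative.

CONFIRM_THRESHOLD = 2  # of 3 repetitions per 24-hour window
PERSIST_WINDOW = 7     # days

def confirmed_days(daily_hits: dict[date, int]) -> list[date]:
    """Measurement days with a confirmed citation.

    daily_hits maps each measurement day to how many of that day's
    3 query repetitions cited the content (0-3).
    """
    return sorted(d for d, hits in daily_hits.items()
                  if hits >= CONFIRM_THRESHOLD)

def classify(daily_hits: dict[date, int]) -> str:
    """Label a content piece as persistent, transient, or never cited."""
    days = confirmed_days(daily_hits)
    if not days:
        return "never cited"
    first = days[0]
    if any(0 < (d - first).days <= PERSIST_WINDOW for d in days[1:]):
        return "persistent"
    return "transient"
```

The first entry of `confirmed_days` also gives the propagation latency when subtracted from the publication date.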
Findings by Engine
Perplexity Pro — Median Latency: 2 Days
Perplexity is the fastest propagator in our study by a significant margin. At the median, Perplexity first cited test content 2 days after publication. For high-DA domains (DA 66–85), median propagation was under 24 hours — and in several cases, content appeared in Perplexity responses within hours of publication.
The architecture explains this: Perplexity performs a real-time web search as part of every query response. It does not rely on training-time representation for current information. If your content is crawlable and indexed by Bing or Google (which Perplexity can query via its backend), it can appear in Perplexity responses within days of going live.
However, Perplexity shows strong domain authority bias. For DA 10–20 domains, median propagation time jumped to 11 days — and 22% of test pieces published on low-DA domains were never cited by Perplexity during the 90-day window, compared to 4% for high-DA domains.
Gemini 1.5 Pro — Median Latency: 3 Days
Gemini surprised us by ranking second-fastest overall, with a median propagation latency of 3 days. This is explained by Gemini's integration with Google's Search index — Gemini has access to real-time Google Search results and can draw on freshly indexed content when generating responses. Given that Google typically crawls high-DA content within hours of publication, Gemini inherits this speed advantage.
However, Gemini's low citation rate (30% of responses cite named sources, compared to Perplexity's 87%) means that fast indexing does not reliably translate to fast citation. Gemini indexes quickly but cites sparingly. Our data shows that content indexed by Gemini within 3 days appeared in a named citation in only 34% of cases — the content was present in Gemini's context but not surfaced as a named reference.
For Gemini, domain authority had a stronger effect on citation rate than on propagation latency. High-DA content propagated in 1–2 days but was cited in responses at only 41% — still higher than the 22% citation rate for low-DA content, but indicating that Gemini's citation behavior depends on factors beyond indexing speed.
Claude with Web Search — Median Latency: 4 Days
Claude with web search enabled shows a median propagation latency of 4 days. Claude uses a selective web search approach — rather than querying the web on every response, Claude's web search activates when the model judges that the query would benefit from current information. This selective activation means that some queries that would surface new content in Perplexity or Gemini do not trigger a web search in Claude, leading to slower observed propagation.
Claude's propagation speed is also affected by content format. FAQ content with JSON-LD schema propagated 1.3 days faster than unstructured long-form articles on the same domains. This format sensitivity was stronger for Claude than for any other engine — suggesting that Claude's web search retrieval pipeline is more responsive to structured markup signals.
For domains in the DA 36–50 range, Claude's median latency of 5 days was within 2 days of Perplexity (3 days) and Gemini (4 days). The gap widens for high-DA domains, where Perplexity can propagate in under a day while Claude averages 3 days.
ChatGPT (GPT-4o with Browse) — Median Latency: 4 Days
ChatGPT with browsing enabled shows the highest variance of the four engines, with propagation latencies ranging from under 24 hours to over 21 days depending on domain authority and content type. The median across all domains and content types was 4 days, matching Claude, but the distribution is notably more skewed toward longer latencies for lower-DA domains.
ChatGPT's browse feature appears to weight source recency and domain authority heavily when deciding whether to query for fresh content or rely on training data. High-DA domains with fresh, structured content saw ChatGPT citations in as little as 1 day. Low-DA domains, even with freshly published, well-structured content, frequently saw latencies of 14–21 days.
ChatGPT showed the strongest response to third-party citation signals. When test content on a low-DA domain was subsequently mentioned in a higher-DA publication (simulated by having another domain in our test panel link to and cite the content), ChatGPT's propagation latency dropped from a median of 21 days to 6 days. No other engine showed as strong a response to this "citation bootstrap" effect.
Base Models: The Training Cutoff Problem
All four engines also have base model versions that operate without real-time web access — relying entirely on their training data. For these models, "propagation latency" is not a matter of days or weeks. It is measured in months to years: the time required for the content to be captured in the next training data snapshot, incorporated into a new model version, and deployed to production.
In practice, the gap between a content publication date and its first appearance in a base model response is 3 to 24 months, with the exact timing depending on:
- Training data collection schedule: Major models typically update training data once or twice per year, though this varies.
- Training and deployment pipeline: After data cutoff, model training, safety evaluation, and deployment can take an additional 3–9 months.
- Temporal representation lag: Even after training, recent events are underrepresented in model knowledge because there is less content written about them (fewer retrospectives, analyses, and follow-up articles) than about events that happened years ago.
The practical implication: a brand that relies only on base model representation for its AIS score, rather than building a web presence that web-enabled LLMs can retrieve, is playing a 12- to 24-month lag game in the typical case. Content published today will not influence base model responses until mid-2027 at the earliest.
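As a back-of-envelope check on the timeline above, the 3- to 24-month range can be projected onto a publication date. This is purely illustrative arithmetic, not part of the study.

```python
from datetime import date

# Illustrative only: projects the 3-24 month base-model lag described
# above onto a publication date. The month offsets come from this
# section; the helper functions are ours.

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day snapped to the 1st)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1, day=1)

def base_model_window(published: date) -> tuple[date, date]:
    """Earliest and latest plausible first appearance in a base model."""
    return add_months(published, 3), add_months(published, 24)
```

A piece published during the study window (say, April 2026) would fall between mid-2026 and spring 2028 under this projection.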
Factors That Affect Propagation Speed
Beyond domain authority, our study identified several content and distribution factors that consistently predict faster propagation across web-enabled LLMs.
| Factor | Direction | Effect on Median Latency | Strongest in |
|---|---|---|---|
| Domain authority (DA 60+ vs. DA 20 and below) | Faster | –14 days | All engines |
| JSON-LD structured markup present | Faster | –2.1 days | Claude, Perplexity |
| Wikipedia / high-authority source mentions brand | Faster | –4.7 days | Gemini, ChatGPT |
| Content republished/cited on higher-DA domain | Faster | –6.2 days | ChatGPT |
| Long-form article vs. short listicle | Faster | –1.4 days | Perplexity |
| Sitemap submitted to GSC same day | Faster | –0.8 days | Gemini |
| Noindex meta tag present (error) | Slower | +60+ days | All engines |
| Low word count (<300 words) | Slower | +3.1 days | Claude, Perplexity |
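One way to read the table is as a set of additive adjustments to a baseline median latency. The sketch below does exactly that. The effect sizes are taken from the table, but the baseline and the additivity assumption are simplifications, not the study's actual model.

```python
# Effect sizes (days) from the table above, treated as additive
# adjustments to a baseline median latency. Additivity across factors
# is a simplifying assumption, not the study's fitted model.

EFFECT_DAYS = {
    "high_da": -14.0,           # DA 60+ vs. DA 20 and below
    "json_ld": -2.1,            # JSON-LD structured markup present
    "authority_mention": -4.7,  # Wikipedia / high-authority mention
    "republished": -6.2,        # republished/cited on higher-DA domain
    "long_form": -1.4,          # long-form article vs. short listicle
    "sitemap_same_day": -0.8,   # sitemap submitted to GSC same day
    "noindex": 60.0,            # noindex meta tag present (error)
    "thin_content": 3.1,        # under 300 words
}

def estimated_latency(baseline_days: float, factors: set[str]) -> float:
    """Baseline median latency plus the table's adjustments, floored at 0."""
    return max(0.0, baseline_days + sum(EFFECT_DAYS[f] for f in factors))
```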
The Source Hierarchy: What LLMs Trust Most
Our data shows that LLMs follow an implicit authority hierarchy when selecting sources to cite. Being mentioned on higher-authority sources dramatically accelerates propagation speed and citation persistence. The observed hierarchy, from highest to lowest impact:
1. Wikipedia: Brands with Wikipedia articles see Gemini citation rates 2.3x higher than comparable brands without Wikipedia presence. Wikipedia propagation to Gemini and ChatGPT occurs within 1–3 days of a Wikipedia page going live or being significantly updated.
2. Major news publications (DA 80+): Press coverage in publications with DA above 80 drives the fastest citation propagation of any owned or earned content. News mentions appear in Perplexity within hours.
3. Industry review platforms (G2, Capterra, Trustpilot): Review platform mentions are cited most consistently by ChatGPT in comparison queries. Establishing a review platform presence is a high-leverage propagation shortcut for B2B brands.
4. High-DA brand websites (DA 50+): A well-structured, high-authority owned domain is the fourth tier — important, but less impactful than being cited on external authority sources.
5. Low-DA owned properties: Content on low-DA owned domains propagates slowest and is least reliably cited. AEO investment for newer brands should prioritize getting cited on higher-authority external sources before heavily investing in owned content volume.
Practical Implications for AEO Strategy
The propagation latency data has concrete implications for how brands should think about AEO investment timelines and tactics:
Expect 30–90 days for meaningful AIS score improvements
Even with fast-propagating engines like Perplexity, a sustained AIS score improvement requires content to be indexed, cited, and develop citation persistence across multiple query types. In our data, brands that published 5+ pieces of high-quality content on a consistent schedule saw measurable AIS score changes at the 30-day mark. Brands publishing one or two pieces saw changes by 60–90 days.
Optimize for Perplexity first if you need fast results
If you need near-term AIS improvements and have a moderate-to-high DA domain (DA 36+), Perplexity is the highest-leverage engine to optimize for. Publish structured, comprehensive content with JSON-LD markup and submit your sitemap immediately. Monitor Perplexity citations as a leading indicator of content effectiveness before improvements propagate to other engines.
Domain authority is the single highest-leverage investment for low-DA brands
For brands with DA below 35, the most effective AEO investment is building domain authority — through backlinks, press coverage, and third-party citations — rather than publishing more owned content. The propagation latency gap between DA 20 and DA 60 domains is 10–15 days across all engines. Content published on a DA 60+ domain propagates faster and is cited more persistently than identical content on a DA 20 domain.
JSON-LD structured markup is a low-effort, high-signal investment
Our data shows a consistent 1.3–2.1 day propagation advantage for content with JSON-LD markup across Claude and Perplexity. This is one of the few technical optimizations with a clear, measurable effect on propagation speed. At minimum, all research pages, product pages, and FAQ pages should include appropriate schema markup.
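As a concrete illustration of the markup in question, the snippet below emits a minimal FAQPage payload. The schema.org types (`FAQPage`, `Question`, `Answer`) are standard; the helper function and its example content are ours.

```python
import json

# Minimal FAQPage JSON-LD generator. The schema.org vocabulary is
# standard; the function itself is an illustrative helper.

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Render question/answer pairs as a FAQPage JSON-LD document."""
    payload = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return json.dumps(payload, indent=2)
```

Embed the returned string in a `<script type="application/ld+json">` tag in the page head so crawlers can pick it up alongside the visible FAQ content.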
Study Summary Table
| Engine | Median Latency (all DA) | High-DA Latency | Low-DA Latency | % Never Cited (90d) | Citation Rate |
|---|---|---|---|---|---|
| Perplexity Pro | 2 days | <1 day | 11 days | 11% | 87% |
| Gemini 1.5 Pro | 3 days | 1 day | 14 days | 24% | 30% |
| Claude (web search) | 4 days | 3 days | 17 days | 19% | 61% |
| ChatGPT (GPT-4o browse) | 4 days | 1 day | 21 days | 28% | 68% |
High-DA = domains with DA 66–85. Low-DA = domains with DA 10–20. "% Never Cited" = fraction of published content pieces that received zero confirmed citations within the 90-day study window.