If someone asks ChatGPT "what's the best tool for X" and your brand isn't mentioned, you have a visibility problem. And it's not a Google problem: it's an LLM problem, and fixing it requires a different playbook than traditional SEO.

This guide gives you the full step-by-step framework for improving your brand's visibility across ChatGPT, Claude, Perplexity, and Gemini. It's structured as a playbook: sequential steps, each building on the last, with measurable outcomes.

Why LLM Visibility Matters Now

- ChatGPT: 500M+ weekly active users
- Perplexity: 15M+ daily active users
- Claude: ~30M monthly active users
- Gemini: integrated reach across Google products

These numbers matter because the usage pattern is different from traditional search. When someone asks an LLM a commercial question ("what CRM should I use?", "best accounting software for freelancers?"), they're not getting 10 blue links. They're getting a single narrative answer with 2–5 sources. If you're not in those sources, you don't exist for that query.

Our data across 500+ brands shows that 74% score below 40/100 on the AIS Index, meaning most brands are nearly invisible in AI search despite being well-established in Google results. The LLM citation landscape is still early, which means there's significant upside for brands that move now.

Data Point

Brands that score above 70 on the AIS Index are cited in AI responses for their target queries 3–5× more often than brands scoring below 40, even when those lower-scoring brands have stronger traditional SEO metrics.

How LLMs Decide What to Cite

LLMs don't crawl the web at query time. They operate from a knowledge base built during training (updated on different schedules per engine) and, for Perplexity specifically, from live web retrieval. Understanding this distinction is foundational to this playbook.

What earns you a citation in LLM responses:

Step 1: Get Your Baseline Score

You can't optimize what you can't measure. Before doing any of the tactical work below, run a scan to get your current AIS Index score across all four engines.

The scan returns:

Run your free baseline scan →

This baseline matters because the tactics in this playbook have different ROI depending on your weakest dimension. If your Visibility score is low (you're not being mentioned at all), the fix is different from a low Authority score (you're mentioned but not trusted as a source). The scan tells you where to start.

Save your baseline score. The entire point of this playbook is to improve it. You'll want to re-scan after each major implementation to measure impact. LLM knowledge bases update on roughly 4–8 week cycles, so expect a lag between action and score change.

Step 2: Set Up llms.txt

The llms.txt file is to LLMs what robots.txt is to traditional search crawlers, except that instead of restricting access, it's a machine-readable file that tells AI crawlers exactly what your brand is, what you do, and which pages to prioritize for ingestion.

Place this file at https://yourdomain.com/llms.txt. Structure it in Markdown:

# [Your Company Name]

> [One sentence: what you do and who you serve]

[2-3 paragraph factual description. No marketing language. Think Wikipedia-style:
what the company does, when it was founded, what problems it solves, key differentiators.]

## Key Pages

- [Homepage](https://yourdomain.com): [Brief description]
- [Product](https://yourdomain.com/product): [Brief description]
- [Pricing](https://yourdomain.com/pricing): [Brief description]
- [Documentation](https://docs.yourdomain.com): [Brief description]

## What We Do

[Bulleted list of core use cases, written as factual capabilities rather than marketing claims]

## Company Facts

- Founded: [Year]
- Headquarters: [City, Country]
- Category: [Industry/category]
- Customers: [Scale / key use cases]
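
Once the file is live, it's worth sanity-checking that it reads the way a machine consumer would parse it. A minimal sketch, under the assumption that readers split on `##` headings (no engine publishes an official llms.txt parser, so the parsing rules here are illustrative):

```python
def parse_llms_txt(text: str) -> dict[str, list[str]]:
    """Split a Markdown llms.txt into sections keyed by '##' heading.

    Lines before the first '##' heading (title, summary, description)
    are collected under the '' key.
    """
    sections: dict[str, list[str]] = {"": []}
    current = ""
    for line in text.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif line.strip():
            sections[current].append(line.rstrip())
    return sections

sample = """# Example Co

> Example Co makes widgets for small teams.

## Key Pages

- [Homepage](https://example.com): Main site

## Company Facts

- Founded: 2019
"""
sections = parse_llms_txt(sample)
print(sorted(k for k in sections if k))  # ['Company Facts', 'Key Pages']
```

If a section you expect (Key Pages, Company Facts) comes back empty, the heading levels in your file are probably off.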

The key principles for an effective llms.txt:

Step 3: Add Structured Data

Schema.org structured data doesn't just help Google: it provides LLMs with unambiguous, machine-readable facts about your organization. At minimum, add Organization schema to your homepage and product pages.

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "[Your Company]",
  "url": "https://yourdomain.com",
  "description": "[One-sentence factual description]",
  "foundingDate": "[Year]",
  "sameAs": [
    "https://en.wikipedia.org/wiki/[YourCompany]",
    "https://www.crunchbase.com/organization/[slug]",
    "https://www.linkedin.com/company/[slug]",
    "https://g2.com/products/[slug]"
  ],
  "areaServed": "Worldwide",
  "knowsAbout": ["[Topic 1]", "[Topic 2]", "[Topic 3]"]
}
</script>

The sameAs property is critical. It tells LLMs that your domain is the same entity as your Wikipedia page, Crunchbase listing, G2 profile, and LinkedIn presence. This entity resolution is how LLMs consolidate authority signals. Without it, a mention on G2 and a mention on your website might be treated as different entities.

Beyond Organization schema, add per-page schema types that match your content:
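
The right types depend on your content, but common schema.org matches include SoftwareApplication or Product for product pages, Article or TechArticle for posts and docs, and HowTo for guides. A sketch of generating per-page blocks from such a mapping (the mapping itself is an illustrative assumption, not an official list; the types are standard schema.org types):

```python
import json

# Illustrative page-type -> schema.org @type mapping; adjust to your site.
PAGE_SCHEMA_TYPES = {
    "product": "SoftwareApplication",
    "blog_post": "Article",
    "docs": "TechArticle",
    "guide": "HowTo",
    "faq": "FAQPage",
}

def page_jsonld(page_type: str, name: str, url: str) -> str:
    """Render a minimal JSON-LD object for a page of the given type."""
    data = {
        "@context": "https://schema.org",
        "@type": PAGE_SCHEMA_TYPES[page_type],
        "name": name,
        "url": url,
    }
    return json.dumps(data, indent=2)

block = page_jsonld("guide", "Setup Guide", "https://example.com/guide")
print(json.loads(block)["@type"])  # HowTo
```

Each rendered object still needs to be wrapped in a `<script type="application/ld+json">` tag on the page, as in the Organization example above.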

Step 4: FAQ Schema for Answer Eligibility

FAQ schema is your most direct path to getting cited in AI search. When LLMs look for answers to specific questions, they prioritize content that clearly states both the question and the answer โ€” and FAQ schema makes that structure machine-readable.

The pattern that works:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is [your product category]?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "[Your company name] is a [category] tool that [specific functionality]. It [what it does for users]. Key features include [feature 1], [feature 2], and [feature 3]."
      }
    },
    {
      "@type": "Question",
      "name": "How does [your product] compare to [competitor]?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "[Factual comparison. Be accurate. LLMs will cross-reference claims.]"
      }
    }
  ]
}
</script>
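
If you maintain FAQ schema across many pages, generating the JSON-LD from (question, answer) pairs is less error-prone than hand-editing each block. A minimal sketch (the function name is illustrative):

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Build FAQPage JSON-LD from (question, answer) pairs."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return json.dumps(data, indent=2)

block = faq_jsonld([
    ("What is Acme?", "Acme is a billing tool for freelancers."),
])
print(json.loads(block)["@type"])  # FAQPage
```

Because the answers live in plain data rather than markup, the same pairs can also feed the visible FAQ copy on the page, keeping the schema and the rendered text in sync.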

FAQ pages to create (each targeting a question your prospects actually ask LLMs):

Step 5: Create Citation-Worthy Content

LLMs don't cite landing pages. They cite content that authoritatively answers specific questions. The gap between what most brands publish (feature descriptions, use cases, customer stories) and what LLMs cite (research, data, frameworks, direct answers) is the core content gap to close.

Content formats LLMs cite most

| Content Type | Citation Rate | Why It Works |
| --- | --- | --- |
| Original research with data | Very High | LLMs cite statistics and benchmarks. If you have the data, you get the citation. |
| Definitional guides ("What is X") | High | Entity definition questions are extremely common LLM queries. |
| Comparison content | High | "X vs Y" queries are a top use case for AI search. Be the reference, not the subject. |
| Step-by-step how-to guides | High | Process questions are direct asks to LLMs. Structured content with numbered steps is ideal. |
| Framework and methodology content | Medium-High | If your team invented a named framework, that's a citation asset with an indefinite lifespan. |
| Product landing pages | Low | Promotional language signals low factual density. Rarely cited directly. |

The format matters as much as the content. Structure your citeable assets for machine parsing:
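
One structural property you can check automatically is whether a page opens with a direct answer instead of narrative buildup. A rough heuristic sketch; the 150-word window and the verb list are assumptions for illustration, not documented engine behavior:

```python
def opens_with_answer(body: str, subject: str, max_words: int = 150) -> bool:
    """Rough check: is the subject named and defined within the opening words?"""
    opening = " ".join(body.split()[:max_words]).lower()
    names_subject = subject.lower() in opening
    defines_it = any(verb in opening for verb in (" is ", " are ", " means "))
    return names_subject and defines_it

print(opens_with_answer(
    "Acme CRM is a sales pipeline tool for small teams. It tracks deals.",
    "Acme CRM",
))  # True
print(opens_with_answer(
    "Our journey began in a garage a decade ago.",
    "Acme CRM",
))  # False
```

A check like this won't judge quality, but it flags pages that bury the definition below the fold, which is the pattern LLMs appear least likely to cite.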

Step 6: Build Authority Signals

Authority is the second-largest component of the AIS Index (30%), and it's determined by where your brand appears across the web, not just your own domain. LLMs heavily weight signals from specific high-authority sources:

Priority authority sources

The principle: citation moats are built on the open web, not your own domain. Your content is the seed; third-party citations are the compounding asset.

Step 7: Measure and Iterate

LLM visibility is not a one-time optimization. Knowledge bases update, competitors act, and your score reflects the current competitive landscape, not just your absolute effort.

A sustainable measurement cadence:

What to look for in your score trends:
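
Whatever signals you watch, the bookkeeping is simple: keep dated scores and compare each re-scan against the baseline. A sketch (the dates and scores below are made up for illustration):

```python
from datetime import date

def trend(scans: list[tuple[date, int]]) -> str:
    """Classify the score trajectory across re-scans (oldest first)."""
    if len(scans) < 2:
        return "insufficient data"
    delta = scans[-1][1] - scans[0][1]
    if delta > 0:
        return f"improving (+{delta} since baseline)"
    if delta < 0:
        return f"declining ({delta} since baseline)"
    return "flat"

history = [
    (date(2024, 1, 15), 34),  # baseline scan
    (date(2024, 3, 10), 41),  # after llms.txt + schema rollout
    (date(2024, 5, 5), 52),   # after content push
]
print(trend(history))  # improving (+18 since baseline)
```

Given the 4–8 week knowledge-base update lag noted earlier, compare scans at least a cycle apart before concluding a tactic didn't work.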

Engine-Specific Tactics

The four major LLMs have meaningfully different behaviors that affect what works:

ChatGPT (OpenAI)

ChatGPT's training-data cutoff creates a lag between publishing content and appearing in responses. Prioritize content that's been indexed and available for a full training cycle. ChatGPT heavily weights Wikipedia and Crunchbase for entity information. For product queries, G2 and review platforms matter.

Perplexity

Perplexity performs live retrieval: it actually fetches pages at query time, not just from training data. This means your SEO fundamentals matter here more than with other engines. Fresh content, fast page loads, accessible crawling, and recent publish dates all influence Perplexity citations directly.

Claude (Anthropic)

Claude shows a stronger preference for authoritative, well-structured content. The "first 150 words" pattern matters more here: Claude often cites content that opens with a direct, comprehensive answer rather than context-setting or narrative buildup. Technical documentation and specification-style content performs well.

Gemini (Google)

Gemini has access to Google's full index and knows your domain's existing authority score. This makes Gemini the engine where traditional SEO strength (domain authority, backlinks, engagement signals) carries the most weight. However, it also has a bias toward Google-native properties (YouTube, Google Business, Google Workspace docs), so presence there matters.

Key cross-engine insight: No single tactic optimizes for all four engines equally. The AIS Index breaks out per-engine scores precisely because the levers are different. A brand with a 90 on Perplexity and a 40 on Claude needs a fundamentally different optimization strategy than a brand averaging 65 across all four.