AEO Glossary
Complete glossary of Answer Engine Optimization terms, metrics, and intelligence scores used in AEO Optima for AI visibility monitoring and optimization.
Overview
This glossary is a comprehensive reference for Answer Engine Optimization terminology, metrics, and intelligence scores. Whether you are new to AI visibility monitoring, evaluating AEO tools, or an experienced practitioner, these definitions cover every concept you will encounter when tracking and optimizing how AI models represent your brand.
Terms are organized alphabetically. Each definition explains what the term means, why it matters, and how it is used in practice.
A
AEO (Answer Engine Optimization)
The practice of optimizing your brand, content, and online presence to appear accurately and prominently in AI-generated answers. AEO is to AI chatbots (ChatGPT, Claude, Gemini, Perplexity, Grok) what SEO is to search engines (Google, Bing). While SEO focuses on ranking in search results, AEO focuses on being cited, mentioned, and recommended when AI models respond to user queries. See What is AEO? for a full introduction.
AEO Score
A rating from 0 to 100 that measures how well a specific web page is optimized for AI citation. The score evaluates schema markup, entity clarity, content structure, FAQ presence, and other factors that influence whether AI models will reference the page when generating answers. Calculated by the Page Optimization tool within AEO Optima.
Action Verification
The process of measuring whether a specific optimization action actually improved AI visibility. AEO Optima tracks each recommended action through its lifecycle — recommendation, implementation, and impact measurement — then correlates the action with visibility changes to determine its efficacy. Over time, this creates an evidence base of what works for your brand and category.
AI Analysis
AI-powered deep analysis of snapshot patterns that goes beyond basic mention detection. It covers sentiment drivers, content gaps, opportunity scoring, and comprehensive analysis modes, and produces actionable insights with specific recommendations for improving brand visibility across AI platforms.
AI Brand Score
A composite metric that represents your brand's overall health across AI answer engines. Derived from multiple dimensions including visibility, sentiment, rank position, citation quality, and cross-model consistency. The AI Brand Score provides a single number that tracks your brand's AI presence over time.
AI Visibility
The degree to which your brand appears in AI-generated answers across different models and query types. AI visibility is distinct from traditional search visibility — a brand can rank #1 on Google and still have zero AI visibility if AI models do not mention it in their responses. AEO Optima measures AI visibility as a percentage of monitored prompts where your brand is mentioned.
Anomaly Detection
Completeness-aware statistical detection of unusual changes in visibility, sentiment, or mention-rate metrics. Uses Z-score analysis with Bonferroni correction across three simultaneous tests. Only analyzes days with sufficient data (at least 80% of expected snapshots and at least 10 snapshots absolute), excludes the current day, and marks anomalies as persistent when two consecutive points are anomalous. This prevents false alarms from sparse data or single spikes.
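The Bonferroni-corrected z-score check can be sketched in a few lines. This is an illustrative simplification, not AEO Optima's actual implementation; the function names and the example series are invented, and the completeness filtering described above is omitted here for brevity:

```python
from statistics import NormalDist, mean, stdev

def bonferroni_z_threshold(alpha=0.05, n_tests=3):
    """Two-sided z threshold after Bonferroni correction across n_tests
    simultaneous tests (e.g. visibility, sentiment, mention rate)."""
    adjusted = alpha / n_tests
    return NormalDist().inv_cdf(1 - adjusted / 2)

def z_score_anomalies(series, threshold):
    """Flag points whose z-score against the series mean exceeds the threshold."""
    mu, sigma = mean(series), stdev(series)
    if sigma == 0:
        return [False] * len(series)
    return [abs((x - mu) / sigma) > threshold for x in series]

visibility = [62, 64, 63, 61, 65, 63, 62, 41, 63, 64]  # one sharp drop
thr = bonferroni_z_threshold()      # ~2.39 instead of the uncorrected 1.96
flags = z_score_anomalies(visibility, thr)
```

The Bonferroni correction raises the flagging bar because three metrics are tested at once; without it, running three tests at alpha = 0.05 would roughly triple the false-alarm rate.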
B
BNCI (Brand Narrative Coherence Index)
One of AEO Optima's 6 proprietary intelligence scores. BNCI measures how consistently AI engines tell your brand's story across different prompts and models. A high BNCI means AI engines have a unified understanding of your brand narrative — they describe your value proposition, differentiators, and positioning in a consistent way. A low BNCI indicates fragmented or contradictory narratives that may confuse users who ask different AI models about your brand.
Bootstrap Prediction Interval
A 95% prediction interval for visibility forecasts built by resampling the forecasting model's residuals 500 times and taking the 2.5th and 97.5th percentiles. This is an honest alternative to assumed-normal intervals — the interval widens naturally where the model is uncertain, without making distributional assumptions that may not hold for AI visibility data.
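The mechanism can be sketched as follows. This is a minimal illustration of residual bootstrapping, assuming the simplest possible scheme (single-step forecast, i.i.d. residual draws); the real forecaster's residual handling may differ:

```python
import random

def bootstrap_interval(point_forecast, residuals, n_resamples=500, seed=42):
    """95% prediction interval: resample historical model residuals,
    add each draw to the point forecast, and take the 2.5th / 97.5th
    percentiles of the simulated outcomes."""
    rng = random.Random(seed)
    simulated = sorted(point_forecast + rng.choice(residuals)
                       for _ in range(n_resamples))
    lo = simulated[int(0.025 * n_resamples)]
    hi = simulated[int(0.975 * n_resamples) - 1]
    return lo, hi

residuals = [-4.0, -2.5, -1.0, 0.0, 0.5, 1.5, 2.0, 3.5]  # model errors on history
lo, hi = bootstrap_interval(58.0, residuals)
```

Because the interval is built from the model's own observed errors, it is automatically wider when historical residuals are large and asymmetric when errors skew one way, with no normality assumption.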
Brand Facts
Verified brand attributes (founding year, headquarters, key products, pricing, market position, etc.) stored in your project settings. Brand facts serve two purposes: the accuracy checker compares AI responses against them to detect hallucinations, and the entity analyzer uses them to score how clearly AI models understand your brand's identity.
Brand Overlap
A metric that measures how often your brand and a specific competitor appear together in the same AI-generated response. High brand overlap indicates that AI models frequently consider both brands in the same competitive context — useful for understanding which competitors AI models associate with your brand.
Branded Prompt
A prompt segment classification for queries that mention your brand name but no competitors (e.g., "What do people think of TechShu?" or "Is AEO Optima good for agencies?"). Analytics filtered to branded prompts show how AI engines describe your brand when users explicitly ask about you.
Building Block
A reusable query template component in the Query Universe system. Building blocks are categorized by type — Core Services, Modifiers, Audiences, Intents, Geographies, and more — and combined by the prompt composer to systematically generate monitoring prompts. This ensures your monitoring covers every dimension of your brand's query space without manual prompt creation.
C
CIPS (Citation Influence & Propagation Score)
One of AEO Optima's 6 proprietary intelligence scores. CIPS measures the strength and quality of citations AI engines use when mentioning your brand. It evaluates source authority, citation positioning within responses, citation frequency across models, and propagation patterns. A high CIPS means your brand is backed by strong, authoritative sources that AI models trust and cite consistently.
Citation Gap
A topic or query area where competitor brands receive citations from AI engines but your brand does not. Citation gap analysis identifies specific sources, publications, or content types that competitors leverage for AI visibility — giving you actionable outreach targets to close the gap. See also CIPS.
CMCS (Cross-Model Consistency Score)
One of AEO Optima's 6 proprietary intelligence scores. CMCS measures how consistently your brand is represented across different AI models (ChatGPT, Claude, Gemini, Perplexity, Grok, and others). A high CMCS means all AI models describe your brand similarly. A low CMCS indicates that some models have incomplete, outdated, or conflicting information — a common situation that requires model-specific optimization strategies.
Competitor Prompt Segment
A prompt segment classification for queries that mention both your brand and one or more competitors, or contain a "brand vs X" comparison pattern (e.g., "AEO Optima vs other AEO tools"). Analytics filtered to competitor prompts reveal your head-to-head positioning in AI-generated comparisons.
Completeness-Aware Detection
A safeguard in AEO Optima's anomaly detection system that excludes days where snapshot capture was partial. A day is analyzed only when it has both at least 80% of expected snapshots (active prompts multiplied by active models) and at least 10 snapshots absolute. This prevents the detector from flagging apparent "visibility drops" that are actually just incomplete data collection.
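The gating rule is simple to express. A sketch of the dual threshold described above, with an illustrative function name:

```python
def day_is_analyzable(captured, active_prompts, active_models,
                      min_ratio=0.80, min_absolute=10):
    """A day qualifies for anomaly analysis only when enough snapshots
    were actually captured: >= 80% of expected AND >= 10 in absolute terms."""
    expected = active_prompts * active_models
    return captured >= min_ratio * expected and captured >= min_absolute

# 25 prompts x 4 models = 100 expected snapshots per day
assert day_is_analyzable(85, 25, 4)       # 85% captured: analyzed
assert not day_is_analyzable(60, 25, 4)   # 60% captured: skipped
assert not day_is_analyzable(8, 2, 5)     # 80% but only 8 absolute: skipped
```

The absolute floor matters for small projects: with 10 expected snapshots, an 80% ratio alone would let a day with 8 captures through, where a single miss swings the metrics.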
Confidence Quality Rating
A High, Medium, or Low signal attached to every visibility forecast. Derived from cross-validated prediction error, coverage probability of prediction intervals, and the Ljung-Box residual autocorrelation test. The confidence quality rating tells you how much weight to place on the forecast's point estimate and interval bounds.
Connector
A third-party integration that extends AEO Optima's data collection and automation capabilities. Supported connectors include Serper (SERP data), DataForSEO (search analytics), Slack (notifications), Looker (BI dashboards), Zapier (workflow automation), Shopify (e-commerce data), Bing, Google Knowledge Graph, Reddit, Wikipedia, and WordPress.
Coverage Report
An analysis generated by the Query Universe system showing how thoroughly your building blocks and prompts cover your brand's query space. Coverage reports include dimension distributions, gap identification, and specific recommendations for expanding monitoring coverage into underrepresented areas.
D
Domain Authority (AI Citation)
A measure of how frequently a domain is cited by AI engines as a source. Calculated from citation frequency, recency, and cross-model presence. Higher domain authority means AI models trust and reference that source more often — making it a valuable signal for identifying which sources to target in your citation-building strategy.
E
Entity Clarity
A score from 0 to 100 that measures how clearly AI engines understand your brand's identity and attributes. Calculated by extracting brand attributes mentioned in AI responses and verifying them against your brand facts. Higher entity clarity means AI engines have accurate, detailed, and consistent knowledge of who your brand is and what it offers.
ETAS (Entity & Topical Authority Score)
One of AEO Optima's 6 proprietary intelligence scores. ETAS measures how strongly AI engines associate your brand with specific topics, products, or expertise areas. A high ETAS indicates that AI models recognize your brand as an authority in its domain — they recommend you for relevant queries and accurately describe your areas of expertise.
G
GEO (Generative Engine Optimization)
The practice of optimizing web content specifically for generative AI engines to cite and reference. While AEO is the broader discipline of brand visibility in AI answers, GEO focuses on the technical and content-level factors that make individual pages more likely to be cited. AEO Optima's GEO Audit analyzes pages across schema markup, entity clarity, FAQ structure, content depth, technical SEO, and freshness to produce a readiness score with actionable recommendations.
GEO Audit
A multi-dimensional assessment of a web page's readiness for AI citation. The audit evaluates schema markup, entity clarity, FAQ structure, content depth, technical SEO signals, and content freshness. Each dimension receives a score, and the combined result indicates how likely AI models are to cite the page. Includes specific, actionable recommendations for improving each dimension.
H
Hallucination
An AI-generated statement about your brand that is factually incorrect. Detected by comparing AI responses against your verified brand facts. Examples include wrong founding dates, incorrect product descriptions, or fabricated statistics. Hallucinations can be flagged via the correction submission workflow to notify AI providers of inaccuracies in their models.
Holt-Winters Ensemble
The statistical forecasting engine used by AEO Optima's visibility forecaster. Trains three competing models — Holt-Winters additive (level + trend + weekly seasonality), Holt damped trend, and Seasonal Naive — grid-searches parameter combinations optimized by AICc, then ensembles them weighted by expanding-window time-series cross-validation RMSE. Produces point forecasts plus bootstrap-calibrated 95% prediction intervals and a confidence quality rating.
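The final ensembling step can be illustrated with inverse-RMSE weighting, a standard way to combine models by cross-validation error. The exact weighting formula AEO Optima uses is not specified beyond "weighted by RMSE", so treat this as one plausible sketch with invented numbers:

```python
def ensemble_weights(cv_rmse):
    """Weight each candidate model by inverse cross-validation RMSE
    (lower error -> higher weight), normalized to sum to 1."""
    inv = {name: 1.0 / rmse for name, rmse in cv_rmse.items()}
    total = sum(inv.values())
    return {name: w / total for name, w in inv.items()}

cv_rmse = {"hw_additive": 2.0, "holt_damped": 4.0, "seasonal_naive": 4.0}
weights = ensemble_weights(cv_rmse)
# hw_additive has half the error, so it gets twice the weight of each other model
forecasts = {"hw_additive": 60.0, "holt_damped": 57.0, "seasonal_naive": 55.0}
point = sum(weights[m] * forecasts[m] for m in forecasts)
```

Combining models this way hedges against any single model's blind spots: the seasonal naive baseline anchors the ensemble when the fitted models chase noise, while the Holt-Winters variants capture trend and weekly cycles.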
I
Intelligence Engine
One of AEO Optima's 5 daily computation engines that process captured snapshot data into actionable intelligence. The five engines are: citation impact analysis (tracing source influence), competitor trajectory tracking (monitoring competitive movement), prompt decomposition (understanding query patterns), error root-cause detection (identifying why visibility drops), and leading indicator identification (predicting future changes before they appear in headline metrics).
Intelligence Scores
AEO Optima's 6 proprietary brand health metrics that measure distinct dimensions of AI visibility. The six scores are: BNCI (narrative coherence), CMCS (cross-model consistency), MEI (market volatility), SDI (sentiment stability), CIPS (citation quality), and ETAS (topical authority). No other AEO tool provides these metrics. See the AEO Tools Comparison for context.
L
Leading Indicator
A metric or signal that predicts future changes in AI visibility before they appear in headline metrics like mention rate or visibility score. AEO Optima's leading indicator engine identifies early warning signals — such as shifts in citation patterns, changes in competitor mention frequency, or sentiment trend inflections — so you can act proactively rather than reactively.
M
MCP (Model Context Protocol)
An open standard for connecting AI assistants to external tools and data sources. AEO Optima provides 74 MCP tools that allow AI clients — including Claude Desktop, Claude Code, ChatGPT, Cursor, VS Code Copilot, Windsurf, Gemini, and Amazon Q — to directly query visibility data, run analytics, capture snapshots, and generate reports. See the MCP API reference for details.
MEI (Market Entropy Index)
One of AEO Optima's 6 proprietary intelligence scores. MEI measures the competitive volatility in your brand's AI answer landscape. High entropy means AI engines frequently change which brands they recommend — indicating a fluid, competitive market where optimization efforts can shift rankings. Low entropy indicates entrenched positions that require different strategies to displace.
Mention Rate
The percentage of monitored prompts where at least one AI engine mentions your brand in its response. Distinct from visibility score in that mention rate can be calculated per-model, per-segment, or per-time-period, while visibility score is the primary aggregate metric shown on the dashboard.
Milestone
A defined checkpoint within a goal-based optimization plan. Milestones represent intermediate targets on the path to a visibility goal (e.g., "reach 40% mention rate" as a milestone toward a 60% goal). Each milestone includes a target metric value and a deadline, with pace indicators showing whether you are on track, ahead, or behind.
N
Non-Branded Prompt
A prompt segment classification for queries that mention neither your brand nor any competitor (e.g., "Best project management tool for remote teams" or "Top CRM for small business"). Non-branded prompts are the most important segment for measuring true organic AI discoverability — they reveal how often AI recommends your brand when users are not specifically asking about you.
P
Pace Status
An indicator within goal tracking that shows whether progress toward a visibility milestone is on track, ahead of schedule, or behind schedule. Calculated by comparing actual metric improvement against the expected trajectory based on milestone dates and target values. Pace status helps teams prioritize optimization efforts on goals that need attention.
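The comparison can be sketched with a linear expected trajectory and a tolerance band. Both the linearity and the 5% band are illustrative assumptions, not AEO Optima's documented thresholds:

```python
from datetime import date

def pace_status(start_value, target_value, start, deadline, today, actual,
                tolerance=0.05):
    """Compare actual progress with the linear trajectory expected by today.
    A +/- 5% band around the expected value counts as 'on track'."""
    elapsed = (today - start).days / (deadline - start).days
    expected = start_value + elapsed * (target_value - start_value)
    if actual >= expected * (1 + tolerance):
        return "ahead"
    if actual <= expected * (1 - tolerance):
        return "behind"
    return "on_track"

# Goal: grow mention rate from 20% to 60% over 100 days; at day 50 we expect ~40%
status = pace_status(20.0, 60.0, date(2025, 1, 1), date(2025, 4, 11),
                     date(2025, 2, 20), actual=46.0)
```

Here the halfway expectation is 40% and the actual 46% clears the upper band, so the goal reads as ahead of schedule.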
Persistent Anomaly
An anomaly where both the flagged data point and the immediately preceding point exceed the z-score threshold. A single spike is often noise caused by data variability; two consecutive anomalous points indicate a genuine trend change. Use the persistent flag to filter out one-off events in alert rules and focus on meaningful shifts.
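The persistence rule reduces to checking each flag against its predecessor. A minimal sketch with an invented function name:

```python
def persistent_flags(anomaly_flags):
    """A point is 'persistent' when it AND the immediately preceding
    point are both anomalous; the first point can never be persistent."""
    return [i > 0 and anomaly_flags[i] and anomaly_flags[i - 1]
            for i in range(len(anomaly_flags))]

flags = [False, True, False, True, True, True, False]
assert persistent_flags(flags) == [False, False, False, False, True, True, False]
```

Note that the isolated spike at index 1 and the first point of the run at index 3 are both suppressed; only the continuation points survive, which is what makes the flag useful for alert rules.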
Prompt Segment
One of three classifications automatically assigned to every monitoring prompt: Branded, Non-Branded, or Competitor. Auto-detected from brand name, competitor names, and "vs" comparison heuristics, with manual override available. All analytics surfaces in AEO Optima accept a segment filter so you can view any metric for any slice of your prompt portfolio.
Q
Query Universe
A systematic framework for generating, organizing, and managing all possible brand-related queries that AI models might encounter. The Query Universe uses building blocks — reusable template components categorized by type — combined by a prompt composer to ensure comprehensive monitoring coverage across topics, intents, audiences, geographies, and modifiers. Coverage reports identify gaps in your monitoring. See Core Concepts for a guided introduction.
R
Rank Position
The position where your brand appears in a numbered or ordered list within an AI-generated response. A rank of 1 means your brand was mentioned first — the strongest position. Not all AI responses contain ranked lists; this metric only applies when the AI model organizes its answer as an ordered recommendation. Tracking rank position over time reveals whether your brand is moving up or down in AI-generated rankings.
S
SDI (Sentiment Drift Index)
One of AEO Optima's 6 proprietary intelligence scores. SDI tracks how AI sentiment toward your brand changes over time. A stable SDI indicates consistent positive or neutral perception across AI platforms. A volatile SDI signals shifting AI opinion — possibly driven by new training data, recent news, or competitor activity — that may require attention before it becomes a reputation problem.
Sentiment
The overall tone an AI model uses when describing your brand in its response. Classified as Positive (favorable language, recommendations, endorsements), Neutral (factual mention without strong opinion), or Negative (critical language, warnings, unfavorable comparisons). Sentiment is tracked per-snapshot and aggregated over time to reveal trends in how AI models perceive your brand.
Share of Voice
Your brand's proportion of total brand mentions across all AI responses in your monitoring set. If AI engines mention 5 brands across your prompts and your brand appears in 40% of those mentions, your Share of Voice is 40%. This competitive metric reveals your relative AI presence compared to competitors within your category.
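The arithmetic from the example above can be made concrete. A sketch with hypothetical brand names, where each list entry represents one observed brand mention:

```python
from collections import Counter

def share_of_voice(mentions, brand):
    """Brand's share of all brand mentions across monitored AI responses."""
    counts = Counter(mentions)
    total = sum(counts.values())
    return counts[brand] / total if total else 0.0

# One entry per brand mention observed across the snapshot set
mentions = (["AEO Optima"] * 4 + ["Competitor A"] * 3
            + ["Competitor B"] * 2 + ["Competitor C"])
sov = share_of_voice(mentions, "AEO Optima")  # 4 of 10 mentions -> 0.4
```

Unlike visibility score, the denominator here is total brand mentions (including competitors), not total prompts, which is why Share of Voice is the better metric for relative competitive standing.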
Snapshot
A single point-in-time capture of an AI model's response to one of your monitoring prompts. Each snapshot records the full response text, whether your brand was mentioned, sentiment classification, rank position, citation sources, token usage, and cost. Snapshots are the fundamental data unit in AEO Optima — all analytics, intelligence scores, and forecasts are derived from snapshot data.
V
Visibility Forecasting
Statistically rigorous prediction of future AI visibility trends using the Holt-Winters Ensemble engine. Forecasts include point estimates, bootstrap-calibrated 95% prediction intervals, confidence quality ratings, and diagnostic information. Requires a minimum of 14 data points for trend models and 21 days for seasonal models. Forecasting helps teams set realistic goals and anticipate visibility changes before they occur.
Visibility Score
The percentage of your monitored prompts where at least one AI engine mentions your brand. A visibility score of 75% means your brand appears in AI responses for three out of every four prompts you are tracking. This is the primary headline metric on the AEO Optima dashboard and the most commonly used measure of overall AI brand presence. Distinct from mention rate, which can be sliced by model, segment, or time period.
Related Resources
- What is AEO? — Understanding answer engine optimization and why it matters
- Core Concepts — A guided introduction to the foundational ideas behind AEO Optima
- AEO Tools Comparison — How AEO Optima compares to other AI visibility approaches
- Supported AI Models — Details on the AI providers and models available for monitoring
- Best Practices — Strategies for improving your AI visibility metrics