AEO Optima Docs
Reference

MCP API Reference

Connect AI assistants like Claude, ChatGPT, Cursor, and more to AEO Optima via the Model Context Protocol (MCP).

Overview

AEO Optima exposes a Model Context Protocol (MCP) server that allows AI assistants to access your brand visibility data, capture snapshots, run analytics, and more — all without leaving your AI tool.

Server URL: https://aeo-optima-mcp.onrender.com/mcp

Transport: Streamable HTTP (JSON-RPC over HTTP POST)

Authentication: Bearer token or OAuth 2.1
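
Under the hood, every MCP call is a JSON-RPC 2.0 message POSTed to the server URL with your Bearer token. As a minimal sketch (the method names `tools/list` and `tools/call` come from the MCP specification; the token value is a placeholder, and the `Accept` header follows the Streamable HTTP transport convention):

```python
import json

SERVER_URL = "https://aeo-optima-mcp.onrender.com/mcp"

def build_request(method, params=None, req_id=1):
    """Return (headers, body) for a single MCP JSON-RPC call."""
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
        "Authorization": "Bearer aeo_YOUR_TOKEN",  # token from Settings > MCP / API
    }
    envelope = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params or {}}
    return headers, json.dumps(envelope).encode()

# e.g. list the available tools, or invoke one with arguments:
headers, body = build_request("tools/call", {"name": "list_projects", "arguments": {}})
```

Most users never see this layer: the clients configured below send these messages for you.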


Getting Started

There are two ways to connect your AI client to AEO Optima: a manually generated token, or OAuth.

Option A: Manual Token (Copy-and-paste)

  1. Log in to the AEO Optima dashboard
  2. Go to Settings > MCP / API
  3. Click Generate New Token
  4. Give it a name (e.g., "Claude Desktop") and select a role cap
  5. Copy the token immediately — it's shown only once
  6. Paste the token into your AI client's configuration (see Client Configuration below)

Option B: OAuth 2.1 (Click-to-connect)

Some AI clients (like Claude Desktop) support OAuth, which lets you connect with a single click — no token copying needed. When you click "Connect" in your AI client:

  1. A browser window opens with the AEO Optima login page
  2. You log in (or are already logged in)
  3. A consent screen shows what the AI client is requesting
  4. You pick your organization and approve
  5. You're redirected back — connection is automatic

OAuth discovery endpoints:

  • Authorization Server Metadata: https://aeo-optima.vercel.app/.well-known/oauth-authorization-server
  • Protected Resource Metadata: https://aeo-optima-mcp.onrender.com/.well-known/oauth-protected-resource

OAuth supports PKCE (required), Dynamic Client Registration, refresh tokens, and token revocation.


Client Configuration

Replace aeo_YOUR_TOKEN with the token you generated from Settings > MCP / API.

Claude Desktop

Go to Settings > Connectors > Add Remote MCP Server and enter:

  • Name: aeo-optima
  • URL: https://aeo-optima-mcp.onrender.com/mcp
  • Authorization Token: Your aeo_... token

Claude Desktop also supports OAuth — it will auto-discover the authorization server and guide you through the consent flow.

Claude Code (CLI)

claude mcp add aeo-optima --transport http \
  https://aeo-optima-mcp.onrender.com/mcp \
  --header "Authorization: Bearer aeo_YOUR_TOKEN"

Cursor

Create or edit .cursor/mcp.json in your project root:

{
  "mcpServers": {
    "aeo-optima": {
      "url": "https://aeo-optima-mcp.onrender.com/mcp",
      "headers": {
        "Authorization": "Bearer aeo_YOUR_TOKEN"
      }
    }
  }
}

Windsurf

Edit ~/.codeium/windsurf/mcp_config.json:

{
  "mcpServers": {
    "aeo-optima": {
      "url": "https://aeo-optima-mcp.onrender.com/mcp",
      "headers": {
        "Authorization": "Bearer aeo_YOUR_TOKEN"
      }
    }
  }
}

VS Code + Copilot

In VS Code 1.99+, go to Settings > MCP Servers and add:

  • Type: http
  • URL: https://aeo-optima-mcp.onrender.com/mcp
  • Headers: Authorization: Bearer aeo_YOUR_TOKEN

ChatGPT / OpenAI

In Developer Mode, go to MCP Servers and add the server URL with your Bearer token.

Google Gemini

Google Gemini supports MCP at the SDK level. Add AEO Optima as a remote MCP server with the server URL and Bearer token.

Amazon Q Developer

In your CLI config or IDE plugin settings, add the MCP server URL with your Bearer token as a header.

OpenAI Agents SDK (Python)

from agents.mcp import MCPServerStreamableHttp

mcp = MCPServerStreamableHttp(
    params={
        "url": "https://aeo-optima-mcp.onrender.com/mcp",
        "headers": {"Authorization": "Bearer aeo_YOUR_TOKEN"},
    }
)

Anthropic API (Direct)

Use the mcp_servers parameter in the Messages API:

{
  "mcp_servers": [{
    "type": "url",
    "url": "https://aeo-optima-mcp.onrender.com/mcp",
    "name": "aeo-optima",
    "authorization_token": "aeo_YOUR_TOKEN"
  }]
}

Authentication

AEO Optima supports two authentication methods. Both produce a Bearer token that is sent with every MCP request.

Method 1: Manual Tokens

Generated from the Settings page. Format: aeo_ + 48 random hex characters (52 characters total).

  • SHA-256 hashed before storage — plaintext is never stored
  • Optional expiration dates for time-limited access
  • Can be revoked at any time from the Settings page
  • Scoped to a specific user + organization
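
Because the format is fixed (`aeo_` plus 48 hex characters), a client can catch copy/paste mistakes before ever hitting the network. An illustrative sanity check (the server remains the source of truth; this helper is not part of the API):

```python
import re

# Manual tokens are "aeo_" + 48 lowercase hex characters (52 chars total).
TOKEN_RE = re.compile(r"^aeo_[0-9a-f]{48}$")

def looks_like_manual_token(token):
    return bool(TOKEN_RE.fullmatch(token))

assert looks_like_manual_token("aeo_" + "ab" * 24)      # 48 hex chars: valid shape
assert not looks_like_manual_token("aeo_short")          # truncated paste
assert not looks_like_manual_token("oat_" + "ab" * 24)  # OAuth tokens use a different prefix
```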

Method 2: OAuth 2.1

OAuth tokens are generated automatically through the browser-based consent flow. Format: oat_ + 48 random hex characters.

OAuth supports:

  • Authorization Code + PKCE (S256 only) — no client secrets needed
  • Dynamic Client Registration (RFC 7591) — AI clients auto-register
  • Refresh Tokens — access tokens auto-renew without re-authorization
  • Token Revocation (RFC 7009) — revoke access or refresh tokens
  • Scopes: mcp:tools, mcp:resources, mcp:prompts

Role Hierarchy

Both token types have a role cap that limits what they can do, regardless of the user's actual role:

  • viewer: read projects, snapshots, analytics, prompts, and models; no write access; no cost data; no admin tools
  • member: everything a viewer can read, plus usage/cost data; can capture snapshots, create/update prompts, and analyze pages; no admin tools
  • admin: read everything; write everything a member can; views costs; no admin tools
  • owner: read everything; write everything; views costs; no admin tools

The effective permission is always the minimum of the token's role cap and the user's actual organization role.
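
That "minimum of the two" rule can be sketched as an ordering over roles (the role names come from the table above; the helper itself is illustrative, not part of the API):

```python
# Rank roles from least to most privileged, then take the lower of the
# token's role cap and the user's actual organization role.
ROLE_ORDER = {"viewer": 0, "member": 1, "admin": 2, "owner": 3}

def effective_role(token_role_cap, org_role):
    lower = min(ROLE_ORDER[token_role_cap], ROLE_ORDER[org_role])
    return next(name for name, rank in ROLE_ORDER.items() if rank == lower)

# An admin-capped token held by a member still acts as a member:
assert effective_role("admin", "member") == "member"
# A viewer-capped token held by an owner is limited to viewer:
assert effective_role("viewer", "owner") == "viewer"
```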


Tools (74)

The MCP server exposes 74 tools across 29 categories. Tools that require a feature flag (Advanced Analytics, GEO Audit, GA4, GSC, Webhooks, Connectors, Citations, Crawlers, Content, Predictive, Enterprise, Query Universe) only work for organizations whose plan includes the corresponding feature.

Projects (2)

  • list_projects (min role: viewer): List all projects in your organization
  • get_project (min role: viewer): Get detailed project info (brand, competitors, LLM configs)

Snapshots (3)

  • get_snapshots (min role: viewer): Retrieve snapshots with filters (date, model, sentiment, brand mention)
  • get_snapshot_detail (min role: viewer): Get full AI response text and analysis for a snapshot
  • capture_snapshot (min role: member): Capture new AI responses for a prompt across models (rate limit: 10/hr)

Analytics (4)

  • get_dashboard_metrics (min role: viewer): KPI summary: visibility %, sentiment, rank, week-over-week changes
  • get_analytics (min role: viewer): Visibility trends, LLM comparison, prompt performance
  • get_usage_metrics (min role: member): Token usage and cost breakdown by provider, model, day
  • get_sentiment_breakdown (min role: viewer): Sentiment analysis by prompt and model

Advanced Analytics (3) — ai_insights_advanced feature flag

  • get_entity_analysis (min role: viewer): Brand attribute extraction from snapshots, clarity scoring (0-100), verified against brand facts. Accepts optional segment parameter.
  • get_shopping_visibility (min role: viewer): Shopping keyword detection, position tracking, price accuracy, competitor analysis. Accepts optional segment parameter.
  • get_multi_language_analysis (min role: viewer): Per-language visibility, character-range detection (CJK/Arabic/Cyrillic), localized recommendations. Accepts optional segment parameter.

Segment filtering: All analytics tools — get_dashboard_metrics, get_analytics, get_sentiment_breakdown, get_entity_analysis, get_shopping_visibility, get_multi_language_analysis, get_visibility_forecast, detect_anomalies, analyze_citation_gaps, and generate_report — accept an optional segment parameter (all, branded, non-branded, or competitor) to filter by prompt type. All REST API analytics endpoints also accept ?segment= as a query parameter.
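
As a hedged sketch of what the segment filter looks like in a tool call's arguments (only `segment` and its four values are documented above; the `project_id` parameter name and the helper itself are assumptions for illustration):

```python
# Allowed values for the optional segment filter, per the docs above.
VALID_SEGMENTS = {"all", "branded", "non-branded", "competitor"}

def analytics_arguments(project_id, segment="all"):
    """Build the arguments object for an analytics tool call."""
    if segment not in VALID_SEGMENTS:
        raise ValueError(f"segment must be one of {sorted(VALID_SEGMENTS)}")
    return {"project_id": project_id, "segment": segment}

# Restrict get_analytics (for example) to branded prompts only:
args = analytics_arguments("proj_123", segment="branded")
```

The REST equivalent is the `?segment=` query parameter mentioned above.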

Prompts (3)

  • list_prompts (min role: viewer): List monitoring prompts for a project (includes topic)
  • create_prompt (min role: member): Create a new monitoring prompt
  • update_prompt (min role: member): Update prompt text, type, topic, priority, or active status

Page Analysis (1)

  • analyze_page (min role: member): Analyze a URL for AEO score (0-100) across 6 categories with improvement suggestions

Models (1)

  • list_models (min role: viewer): List available AI models from the dynamic registry, optionally filtered by provider

Alerts (1)

  • get_alerts (min role: viewer): Check for visibility drops, sentiment shifts, capture failures, rank changes

AI Analysis (2)

  • run_analysis (min role: member): Run AI-powered analysis: sentiment_drivers, content_gaps, opportunity_scoring, comprehensive
  • get_analysis_results (min role: viewer): Retrieve completed AI analysis results for a project

Actions (2)

  • list_actions (min role: viewer): List insight actions (auto-generated from AI analysis or manual)
  • update_action (min role: member): Update an action's status (pending, in_progress, completed, dismissed) or notes

Plan & Quotas (1)

  • get_plan_info (min role: viewer): Get organization plan, feature flags, quota limits, and current usage counts

GEO Audit (2)

  • run_geo_audit (min role: member): Run a Generative Engine Optimization audit on a URL — scores schema, entity clarity, FAQ structure, content depth, technical SEO, freshness
  • list_geo_audits (min role: viewer): List past GEO audit results for a project

GA4 (2)

  • get_ai_traffic (min role: viewer): Get AI referral traffic data from Google Analytics 4 (sessions, users, pageviews from ChatGPT, Perplexity, Claude, Gemini)
  • get_ga4_status (min role: viewer): Check GA4 connection status for a project

GSC (2)

  • get_search_performance (min role: viewer): Get Google Search Console data (clicks, impressions, CTR, position) for top queries
  • get_gsc_status (min role: viewer): Check Google Search Console connection status for a project

Webhooks (2)

  • list_webhooks (min role: admin): List webhook endpoints registered for the organization
  • get_webhook_deliveries (min role: admin): Get recent delivery logs for a webhook endpoint

Citation Intelligence (3)

  • get_citations (min role: viewer): Get citation sources for a project with aggregated counts and category breakdowns
  • analyze_citation_gaps (min role: member): Identify sources that cite competitors but not your brand — actionable outreach targets
  • get_domain_authority (min role: viewer): Get domain authority scores for citation sources (frequency, recency, cross-model presence)

Crawler Intelligence (2)

  • get_crawler_dashboard (min role: viewer): AI bot monitoring dashboard — activity logs, blocked/allowed status, detected patterns
  • analyze_robots (min role: viewer): Analyze robots.txt and ai.txt for AI bot configuration with recommendations

Content Intelligence (3)

  • generate_faqs (min role: member): Generate FAQ content from project prompts and brand facts (suitable for FAQ schema)
  • generate_schema_markup (min role: member): Generate JSON-LD structured data for a URL using brand facts and page content
  • get_corrections (min role: viewer): Get hallucination correction submissions (drafts, submitted, resolved)

Connectors (2)

  • list_connectors (min role: admin): List registered connectors (Serper, DataForSEO, Slack, Looker, Zapier, Shopify)
  • manage_connector (min role: admin): Create, test, sync, or delete a connector

Predictive & Edge (3)

  • get_visibility_forecast (min role: viewer): Visibility forecast via Holt-Winters ensemble (level + trend + weekly seasonality) with bootstrap-calibrated 95% prediction intervals. Returns winning model, cross-validated RMSE/MAE/MAPE, coverage probability, Ljung-Box residual test, and confidence quality rating. Accepts optional segment parameter.
  • detect_anomalies (min role: viewer): Completeness-aware z-score anomaly detection on visibility, sentiment, and mention-rate metrics. Skips partial-capture days (≥80% completeness + ≥10 snapshots required), excludes today, applies Bonferroni correction for 3 simultaneous tests, marks persistent when 2+ consecutive points anomalous. Accepts optional segment parameter.
  • get_benchmarks (min role: viewer): Compare project visibility against industry benchmarks (percentile rank)

Enterprise (2)

  • get_audit_logs (min role: admin): SOC 2 compliance audit logs for an organization (user actions, data access, config changes)
  • get_revenue_attribution (min role: viewer): Multi-touch revenue attribution: first_touch, last_touch, linear, time_decay, position_based

Query Universe (6)

  • list_building_blocks (min role: viewer): List building blocks for a project grouped by category (Core services, Modifiers)
  • manage_building_blocks (min role: member): Create, update, or delete building blocks
  • compose_prompts (min role: member): Generate prompt suggestions from building blocks (cap: 200)
  • get_coverage_report (min role: viewer): Get or regenerate Query Universe coverage report (dimension distributions, gaps, recommendations)
  • seed_building_blocks (min role: member): Seed building blocks from industry templates
  • backfill_prompts (min role: member): Enrich existing prompts with enhanced classification (intent, journey stage, freshness, risk, SDS tier)

Reports (6)

  • generate_report (min role: member): Generate an AEO report for a project. Supports multiple formats (PDF, slide PDF, Excel, CSV) and report types (executive, standard, comprehensive, competitive). Returns report ID, download URL, AI brand score, and letter grade.
  • get_report_history (min role: viewer): Get report generation history for a project. Returns past reports with metadata, scores, and download URLs.
  • get_report_download (min role: viewer): Get a signed download URL for a specific report. URL expires after 1 hour.
  • create_report_share (min role: member): Create a shareable link for a report. Supports optional password protection, expiry, and comment permissions.
  • list_report_shares (min role: member): List all shared report links for a project, including view counts and status.
  • revoke_report_share (min role: admin): Revoke (deactivate) a shared report link. The link will no longer be accessible.

Goals (3)

  • list_goals (min role: viewer): List visibility goals with milestones and pace status. Filter by status or segment.
  • create_goal (min role: member): Create a goal with auto-computed milestones. Supports 7 metrics across 4 segments. Returns feasibility assessment.
  • update_goal (min role: member): Update a goal's target, date, status, or notes.

Insights (2)

  • list_insights (min role: viewer): List intelligence insights generated by computation engines. Filter by type (11 types), severity, or status.
  • update_insight (min role: member): Acknowledge, dismiss, or convert an insight to an action.

Intelligence (3)

  • get_intelligence_scores (min role: viewer): Get all 6 intelligence scores: BNCI, CMCS, MEI, SDI, CIPS, ETAS.
  • get_intelligence_summary (min role: viewer): Get unified intelligence summary with KPIs, timeline, recommendations, and action counts.
  • verify_action (min role: member): Trigger scoped measurement for completed actions. Compares visibility before/after to measure real impact.

Quick Audit (1)

  • quick_audit (min role: viewer): Run a 3-model brand check (ChatGPT, Claude, Gemini). Returns which models mention the brand with excerpts.

Admin — Customer (2)

  • get_health_report (min role: platform_admin): System health check across all platform components
  • get_platform_stats (min role: platform_admin): Platform-wide statistics (orgs, users, projects, snapshots, costs)

Admin — Platform (5)

  • get_cockpit_overview (min role: platform_admin): Platform cockpit dashboard — balance, runway, worker health, cost anomalies
  • manage_org_credits (min role: platform_admin): Allocate or adjust credits for a customer organization
  • pause_org (min role: platform_admin): Pause or resume a customer organization's capture access
  • get_cost_intelligence (min role: platform_admin): Granular cost breakdown by org, project, user, provider, or time period
  • manage_killswitches (min role: platform_admin): Enable or disable platform-wide killswitches for capture, analysis, or email

All admin tools are only available to users whose email is listed in the PLATFORM_ADMIN_EMAILS environment variable.


Resources (10)

Resources are read-only data endpoints your AI assistant can browse for context. Two are static; the other eight are templated by ID.

  • aeo://projects: List of all projects in your organization
  • aeo://models: All available AI models grouped by provider
  • aeo://projects/{id}/summary: Project summary with key metrics
  • aeo://projects/{id}/snapshots/recent: Last 10 snapshots for a project
  • aeo://projects/{id}/competitors: Competitor list for a project
  • aeo://projects/{id}/geo-audits: Recent GEO audit results for a project
  • aeo://projects/{id}/ai-traffic: AI referral traffic summary for a project
  • aeo://projects/{id}/actions: Pending insight actions for a project
  • aeo://organizations/{id}/plan: Current plan, limits, and feature flags
  • aeo://models/{provider}: Models from a specific provider

Prompts (6)

Prompt templates generate structured reports from your data.

  • weekly_report: Weekly AI visibility report with trends, highlights, recommendations (arguments: project_id)
  • competitor_analysis: Competitor comparison: share of voice, overlap, rankings (arguments: project_id, days?)
  • content_recommendations: Content improvement suggestions from snapshot analyses (arguments: project_id, limit?)
  • visibility_summary: Quick current-state summary with today's metrics and alerts (arguments: project_id)
  • geo_optimization: GEO optimization plan based on the latest audit results (arguments: project_id)
  • ai_traffic_analysis: Analyze AI referral traffic trends with growth strategies (arguments: project_id, days?)

Rate Limits

  • 60 requests / minute: per token, all tools
  • 1,000 requests / hour: per token, all tools
  • 10 captures / hour: per token, capture_snapshot only

When rate limited, the tool returns an error with a retryAfter value in seconds.
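
A client can honor that value with a simple retry loop. A minimal sketch (`call_tool` is a placeholder for whatever function actually issues the MCP request in your client; the result shape with a top-level `retryAfter` key is assumed for illustration):

```python
import time

def call_with_retry(call_tool, max_attempts=3, sleep=time.sleep):
    """Retry a rate-limited tool call, waiting the server-suggested seconds."""
    for attempt in range(max_attempts):
        result = call_tool()
        retry_after = result.get("retryAfter")
        if retry_after is None:
            return result              # not rate limited; done
        if attempt < max_attempts - 1:
            sleep(retry_after)         # wait exactly as long as the server asks
    raise RuntimeError("still rate limited after retries")
```

Passing `sleep` as a parameter keeps the loop testable without real waiting.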


Error Handling

Tool errors are returned as content with isError: true:

{
  "content": [{ "type": "text", "text": "{\"error\": \"...\", \"statusCode\": 401}" }],
  "isError": true
}

  • 401: Missing, invalid, expired, or revoked token
  • 403: Insufficient role or cross-org access denied
  • 404: Resource not found
  • 429: Rate limit exceeded
  • 500: Internal server error
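
Because the error details arrive as a JSON string inside the content text, a client unwraps them in two steps. A sketch based on the example shape above:

```python
import json

def extract_error(tool_result):
    """Return (message, status_code) if the result is an error, else None."""
    if not tool_result.get("isError"):
        return None
    # The text field is itself a JSON document carrying error + statusCode.
    payload = json.loads(tool_result["content"][0]["text"])
    return payload.get("error"), payload.get("statusCode")

result = {
    "content": [{"type": "text", "text": '{"error": "Rate limit exceeded", "statusCode": 429}'}],
    "isError": True,
}
assert extract_error(result) == ("Rate limit exceeded", 429)
```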

Audit Logging

Every tool call is logged for security and usage tracking:

  • Who: Token, user, organization
  • What: Tool name, input parameters (sanitized)
  • When: Timestamp and duration
  • Result: Success or error

Audit logs are viewable by organization admins.


Example Conversations

Check Mention Rate & Visibility Score

You: How is my brand doing on AI search engines today?

AI (calls list_projects then get_dashboard_metrics): Your brand "TechShu" has a 30% mention rate and a Visibility Score of 49/100 across AI search engines today, with a sentiment score of 72%. Average rank position is 3.2, up from 3.5 last week.

Capture a Snapshot

You: Run a snapshot for my "best CRM software" prompt

AI (calls list_prompts to find the prompt, then capture_snapshot): Captured responses from 8 AI models. Your brand was mentioned in 3 out of 8 responses. Sentiment was positive in 2 and neutral in 1.

Weekly Report

You: Generate my weekly visibility report

AI (invokes weekly_report prompt): Here's your weekly report for TechShu AEO Tracking...


OAuth 2.1 Developer Reference

If you're building an MCP client or integration that needs OAuth (rather than static tokens), here are the technical details.

Discovery

Your client should first fetch the Protected Resource Metadata to find the authorization server:

GET https://aeo-optima-mcp.onrender.com/.well-known/oauth-protected-resource

Then fetch the Authorization Server Metadata:

GET https://aeo-optima.vercel.app/.well-known/oauth-authorization-server

Dynamic Client Registration

Register your client automatically (RFC 7591):

POST https://aeo-optima.vercel.app/api/mcp/oauth/register
Content-Type: application/json

{
  "client_name": "My AI Tool",
  "redirect_uris": ["http://localhost:3000/callback"],
  "grant_types": ["authorization_code"],
  "response_types": ["code"],
  "token_endpoint_auth_method": "none"
}

Redirect URIs must be either localhost (any port) or HTTPS.

Authorization Flow

  1. Generate a PKCE code verifier (43-128 character random string) and its S256 challenge
  2. Redirect the user to the authorization endpoint:
GET https://aeo-optima.vercel.app/api/mcp/oauth/authorize
  ?response_type=code
  &client_id=mcp_YOUR_CLIENT_ID
  &redirect_uri=http://localhost:3000/callback
  &code_challenge=BASE64URL_S256_HASH
  &code_challenge_method=S256
  &scope=mcp:tools mcp:resources mcp:prompts
  3. User logs in, sees the consent screen, picks their organization and role cap, and approves
  4. User is redirected to your redirect_uri with ?code=AUTH_CODE
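
Step 1 of the flow above can be implemented with the standard library alone. A sketch (per RFC 7636: the verifier is a 43-128 character random string, and the S256 challenge is the base64url-encoded SHA-256 of it, without padding):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code verifier and its S256 code challenge."""
    verifier = secrets.token_urlsafe(64)  # 86 chars, within the 43-128 limit
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
assert 43 <= len(verifier) <= 128
```

Send `challenge` as code_challenge in the authorization request, and keep `verifier` secret until the token exchange.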

Token Exchange

Exchange the authorization code for tokens:

POST https://aeo-optima.vercel.app/api/mcp/oauth/token
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code
&code=AUTH_CODE
&client_id=mcp_YOUR_CLIENT_ID
&redirect_uri=http://localhost:3000/callback
&code_verifier=YOUR_ORIGINAL_VERIFIER

Response:

{
  "access_token": "oat_...",
  "token_type": "bearer",
  "expires_in": 3600,
  "refresh_token": "ort_...",
  "scope": "mcp:tools mcp:resources mcp:prompts"
}
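
Note that expires_in is a lifetime in seconds, not a timestamp. A client typically records an absolute expiry and refreshes shortly before it; a minimal sketch (the 60-second safety margin is an arbitrary illustrative choice):

```python
import time

def expiry_timestamp(token_response, margin=60, now=time.time):
    """Absolute time at which to refresh, a margin before the token expires."""
    return now() + token_response["expires_in"] - margin

resp = {"access_token": "oat_example", "token_type": "bearer", "expires_in": 3600}
refresh_at = expiry_timestamp(resp, now=lambda: 1_000_000.0)
assert refresh_at == 1_000_000.0 + 3600 - 60
```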

Refresh Tokens

When the access token expires, use the refresh token to get a new one:

POST https://aeo-optima.vercel.app/api/mcp/oauth/token
Content-Type: application/x-www-form-urlencoded

grant_type=refresh_token
&refresh_token=ort_YOUR_REFRESH_TOKEN
&client_id=mcp_YOUR_CLIENT_ID

Token Revocation

Revoke an access token or refresh token (RFC 7009):

POST https://aeo-optima.vercel.app/api/mcp/oauth/revoke
Content-Type: application/x-www-form-urlencoded

token=oat_OR_ort_TOKEN

Always returns HTTP 200, regardless of whether the token was valid.

OAuth Scopes

  • mcp:tools: Access to all 74 MCP tools
  • mcp:resources: Access to all 10 MCP resources
  • mcp:prompts: Access to all 6 MCP prompt templates

All three scopes are granted by default if no scope is specified.


Compatibility

  • Claude Desktop: Bearer token, OAuth (supported)
  • Claude Code (CLI): Bearer token (supported)
  • ChatGPT / OpenAI: Bearer token, OAuth (supported)
  • Cursor: Bearer token (supported, all 74 tools)
  • Windsurf: Bearer token (supported)
  • VS Code + Copilot: Bearer token, OAuth (supported, v1.99+)
  • Google Gemini: Bearer token, OAuth (supported, SDK-level)
  • Amazon Q Developer: Bearer token (supported)
  • OpenAI Agents SDK: Bearer token (supported)
  • Anthropic API: Bearer token (supported)