Best Practices
Proven strategies for writing effective prompts, improving AI visibility, and managing multiple projects in AEO Optima.
Overview
Getting value from AEO Optima depends on how well you set up your monitoring and how you act on the data. This guide covers field-tested practices organized by category to help you get the most accurate insights and drive real improvements in your AI visibility.
Prompt Writing
The quality of your prompts directly determines the quality of your visibility data. Poorly written prompts produce misleading metrics. Well-crafted prompts mirror real user behavior and give you actionable intelligence.
Write Like Real Users
AI answer engines receive natural language questions from real people. Your prompts should match that style.
| Instead of This | Write This |
|---|---|
| "Best CRM software" | "What is the best CRM software for small e-commerce businesses?" |
| "Project management tools comparison" | "How does Asana compare to Monday.com for remote teams?" |
| "Email marketing" | "Can you recommend an email marketing platform that integrates with Shopify?" |
Prompts phrased as complete questions produce more realistic and useful AI responses than keyword-style queries.
Mix Broad and Specific
Use a combination of prompt types to get a complete picture of your visibility landscape.
- Broad / Discovery prompts — "What are the best tools for [your category]?" These reveal whether AI engines know about your brand at all.
- Specific / Recommendation prompts — "Can you recommend a [your product type] for [specific use case]?" These test whether AI recommends you for your target audience.
- Comparison prompts — "How does [Your Brand] compare to [Competitor]?" These reveal how AI positions you relative to competitors.
- Problem-solving prompts — "What is the best way to [solve a problem your product addresses]?" These test whether AI suggests your product as a solution.
Tip: Aim for a ratio of roughly 40% broad, 30% specific, 20% comparison, and 10% problem-solving prompts. Adjust as you learn which categories yield the most useful data.
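The suggested mix can be turned into concrete prompt counts. Below is a minimal sketch, assuming the 40/30/20/10 ratio from the tip above; the function and category names are illustrative, not part of AEO Optima.

```python
# Sketch: split a prompt budget across the suggested category mix.
# Ratios follow the tip above; adjust them as you learn which
# categories yield the most useful data.

PROMPT_MIX = {
    "broad": 0.40,
    "specific": 0.30,
    "comparison": 0.20,
    "problem_solving": 0.10,
}

def allocate_prompts(total: int) -> dict[str, int]:
    """Distribute `total` prompt slots across categories, largest remainder first."""
    raw = {cat: total * share for cat, share in PROMPT_MIX.items()}
    counts = {cat: int(v) for cat, v in raw.items()}
    # Hand out leftover slots to the categories with the largest remainders.
    leftovers = total - sum(counts.values())
    for cat in sorted(raw, key=lambda c: raw[c] - counts[c], reverse=True):
        if leftovers == 0:
            break
        counts[cat] += 1
        leftovers -= 1
    return counts

print(allocate_prompts(25))
```

For a 25-prompt project this yields roughly 10 broad, 7-8 specific, 5 comparison, and 2-3 problem-solving prompts.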
Include Competitor Comparisons
Comparison prompts are among the most valuable because they directly reveal how AI engines rank you against competitors. Include two to three comparison prompts for each of your top competitors.

Examples:
- "How does [Your Brand] compare to [Competitor A] for [use case]?"
- "What are the pros and cons of [Your Brand] versus [Competitor B]?"
- "[Your Brand] vs [Competitor C] — which is better for [audience]?"
Update Prompts Periodically
AI models evolve, and so does the way people ask questions. Review your prompt list monthly and:
- Remove prompts that are no longer relevant to your market
- Add new prompts based on keywords discovered in the Insights page
- Refresh the wording of existing prompts to match current search trends
- Add prompts for new products, features, or use cases you have launched
Improving AI Visibility
Monitoring is only half the equation. Use these strategies to actively improve how AI engines represent your brand.
Understand AEO Scores
The Page Optimization tool assigns a score from 0 to 100 for any URL. Higher scores correlate with a greater likelihood of being cited in AI-generated responses.
- 80-100: Excellent. The page is well-structured for AI citation.
- 60-79: Good. Minor improvements could increase citation likelihood.
- 40-59: Fair. Significant optimization opportunities exist.
- Below 40: Needs attention. The page is unlikely to be cited by AI engines in its current form.
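The tiers above can be expressed as a simple lookup. This is a sketch mirroring the cutoffs in this guide; the function name is illustrative and not part of the AEO Optima API.

```python
# Sketch: map a Page Optimization score (0-100) to the tiers described above.
# Cutoffs mirror this guide; the function name is illustrative.

def score_tier(score: int) -> str:
    if not 0 <= score <= 100:
        raise ValueError("AEO scores range from 0 to 100")
    if score >= 80:
        return "Excellent"
    if score >= 60:
        return "Good"
    if score >= 40:
        return "Fair"
    return "Needs attention"
```

For example, a page scoring 58 lands in "Fair" but sits only two points from "Good", which is why near-tier pages make the best optimization targets.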
Focus your optimization efforts on pages that are closest to the next tier — a page scoring 58 may only need a few changes to reach 60+.
Use Structured Data
AI models parse structured data more effectively than unstructured text. Implement these schema types where appropriate:
- FAQ Schema — Mark up frequently asked questions on your pages. AI models often pull directly from FAQ markup when answering user questions.
- HowTo Schema — Step-by-step guides with HowTo schema are highly citable for procedural queries.
- Product Schema — Helps AI models understand product attributes, pricing, and reviews.
- Organization Schema — Establishes your brand identity and authority in AI knowledge bases.
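As a concrete illustration of the first schema type, the sketch below generates schema.org FAQPage JSON-LD for embedding in a `<script type="application/ld+json">` tag. The question and answer text are placeholders; the FAQPage structure itself is standard schema.org markup.

```python
import json

# Sketch: generate FAQPage JSON-LD from question/answer pairs.
# The Q&A content here is placeholder text; the @context/@type/mainEntity
# structure follows the schema.org FAQPage vocabulary.

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    markup = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return json.dumps(markup, indent=2)

print(faq_jsonld([
    ("Does the platform integrate with Shopify?",
     "Yes. The integration syncs products and orders automatically."),
]))
```

Validate the output with a structured-data testing tool before deploying, since AI models often pull answers directly from this markup.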
Write Clear, Factual Content
AI models prioritize content that is:
- Factual and specific — Include concrete numbers, dates, and verifiable claims. "Trusted by 10,000+ teams" is more citable than "Trusted by many teams."
- Well-structured — Use clear headings, bullet points, and short paragraphs. AI models extract information from well-organized content more reliably.
- Authoritative — Back up claims with data, case studies, or third-party references. AI models weigh authoritative sources more heavily.
- Current — Regularly updated content signals freshness. AI models prefer recent information over stale pages.
Tip: Write content as if you were briefing a journalist. Clear, factual, well-organized information is exactly what AI models look for when generating answers.
Build Third-Party Mentions
AI models learn about brands from across the web, not just from your own website. Increase your brand's presence in:
- Industry publications and blogs — Guest posts, thought leadership articles, and press coverage all feed into AI training data.
- Review platforms — Reviews on G2, Capterra, Trustpilot, and similar platforms are frequently cited by AI models.
- Wikipedia and knowledge bases — If your brand is notable enough, a Wikipedia article significantly boosts AI recognition.
- Social media and forums — Active presence on platforms like Reddit, Stack Overflow, and industry forums contributes to brand recognition.
Monitor and Iterate
Improving AI visibility is an ongoing process, not a one-time project.
- Capture baseline snapshots before making changes
- Implement content or structural improvements
- Capture new snapshots after changes propagate (allow 48-72 hours)
- Compare before and after metrics
- Double down on what works, adjust what does not
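The compare step above can be sketched as a simple diff of snapshot metrics. The metric names below are illustrative, not AEO Optima's actual field names; substitute whichever visibility metrics your snapshots expose.

```python
# Sketch: compare baseline and follow-up snapshot metrics to see which
# changes moved the needle. Metric names are illustrative placeholders.

def metric_deltas(before: dict[str, float], after: dict[str, float]) -> dict[str, float]:
    """Return after-minus-before for every metric present in both snapshots."""
    return {name: round(after[name] - before[name], 2)
            for name in before.keys() & after.keys()}

baseline = {"mention_rate": 0.42, "avg_position": 3.1, "citation_count": 12}
followup = {"mention_rate": 0.55, "avg_position": 2.4, "citation_count": 18}
print(metric_deltas(baseline, followup))
```

A positive delta on mention rate or citation count after a change is the signal to double down; a flat or negative delta means adjust and re-test.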
Managing Multiple Projects
AEO Optima supports tracking multiple brands, products, or business units within a single organization. Here is how to manage them effectively.
Create Separate Projects
Each distinct brand or product line should have its own project. This keeps prompts, configurations, schedules, and analytics cleanly separated.
Examples of when to create separate projects:
- Different brands within a portfolio company
- Different product lines with distinct audiences
- Different geographic markets with localized prompts
- Client accounts (for agencies managing multiple brands)
Use the Project Selector
The project selector in the sidebar lets you switch between projects instantly. All pages — Dashboard, Prompts, Snapshots, Analytics — automatically show data for the selected project.
Tip: When reviewing multiple projects, work through them sequentially during your weekly review rather than switching back and forth. This reduces context-switching and helps you focus on each brand's unique trends.
Keep Configurations Independent
Each project maintains its own:
- Prompt list — Tailored to the specific brand or product
- LLM configuration — Which AI models to monitor (may vary by market or budget)
- Schedules — Capture frequency appropriate for each brand's monitoring needs
- Competitor list — Relevant competitors for that specific market
This independence ensures that changes to one project never affect another.
Naming Conventions
Use clear, consistent names for projects to make the sidebar easy to navigate:
- `[Brand Name] — [Market]` for multi-market brands (e.g., "Acme Corp — US", "Acme Corp — EU")
- `[Client Name] — [Product]` for agencies (e.g., "ClientCo — SaaS Platform", "ClientCo — Mobile App")
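A convention like this is easiest to keep consistent if project names are generated rather than typed by hand. A minimal sketch, assuming the hypothetical helper below rather than any AEO Optima API:

```python
# Sketch: build project names that follow the convention above, so related
# projects sort together in the sidebar. The helper name is illustrative.

def project_name(primary: str, qualifier: str) -> str:
    """Format '[Brand/Client] — [Market/Product]' project names consistently."""
    return f"{primary.strip()} — {qualifier.strip()}"

print(project_name("Acme Corp", "US"))
print(project_name("ClientCo ", "SaaS Platform"))
```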
Summary
| Category | Key Takeaway |
|---|---|
| Prompts | Write natural questions that mirror real user behavior |
| Visibility | Structured data, factual content, and third-party mentions drive AI citations |
| Projects | Separate projects per brand with independent configurations |
| Iteration | Monitor, measure, improve, repeat |