You Now Have an AI Visibility Expert on Call. Here’s What to Ask It.
Before the MCP connection, the workflow looked like this: open a dashboard, read a number, copy it into a doc, spend an hour trying to figure out why it moved, then make a guess about what to do next.
After the MCP connection, that whole process collapses into a conversation. Your LLM (Claude, Cursor, whatever you use) can see your AI visibility score, citation trends, sentiment data, and competitor gaps in real time. It can explain why a number changed. It can suggest what to write to fix it. It can publish the article when you're done. All from the same chat window.
This isn't automation for its own sake. It's the difference between reading a report and having a strategist in the room who's already read it.
The questions you can ask right now
Here's what a real session looks like, using Reaudit connected to Claude Desktop via the MCP server.
1. "Show me my AI visibility score"
Ask: "What is my current AI visibility score in 3dplotter and how has it moved over the past four weeks?"
What comes back: your overall score, a breakdown by platform, and a trend flag showing which engine is pulling the number up or down. If the score dropped, you know where to look before you ask the next question.
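That trend flag is, at heart, per-engine arithmetic: compare each platform's score to the previous period and find the biggest mover. A minimal sketch of that logic (the engine names, scores, and function name here are invented for illustration, not Reaudit's internal code):

```python
def biggest_mover(prev: dict, curr: dict) -> tuple:
    """Return (engine, delta) for the engine whose score moved the most.

    `prev` and `curr` map engine name -> visibility score for two periods.
    """
    deltas = {engine: curr[engine] - prev[engine] for engine in curr}
    engine = max(deltas, key=lambda e: abs(deltas[e]))
    return engine, deltas[engine]

# Example: Perplexity fell 8 points while ChatGPT rose 2,
# so Perplexity is flagged as the engine pulling the number down.
print(biggest_mover(
    {"perplexity": 60, "chatgpt": 70},
    {"perplexity": 52, "chatgpt": 72},
))
```

The LLM does this comparison for you across every tracked engine, which is why the answer arrives as "Perplexity is dragging the average" rather than a table you have to eyeball.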
2. "Which queries are actually driving my brand mentions?"
Ask: "List the top queries that generated brand mentions this month, with sentiment and source engine."
What comes back: the specific prompts people were asking AI engines when your brand appeared, which platforms surfaced you, and the sentiment attached to each mention. Most brands discover their mentions are clustered around a narrow set of queries — often comparison and alternative searches rather than direct brand searches. Knowing that changes what you write.
3. "Compare me to my top competitor — where am I winning and where am I losing?"
Ask: "Give me a side-by-side comparison with [competitor] on citation count, sentiment, and visibility score."
What comes back: the gap in raw numbers, which engine the gap is largest on, and where you're actually ahead. The LLM can then connect the citation gap to specific content themes — so you're not just looking at a number, you're looking at a reason.
4. "What content should I create based on where I'm losing?"
Ask: "Based on my citation gap, what article topics would close it?"
What comes back: specific topic suggestions tied to the queries where competitors are appearing and you aren't. Not generic content advice — recommendations grounded in your actual data. The output is ready to drop into a brief.
5. "Write that article and publish it to WordPress"
Ask: "Create a draft for that article and push it to my WordPress site."
What comes back: a full draft with schema markup applied, published as a WordPress draft directly through the MCP connection. No copy-pasting. No switching tabs. The insight and the response live in the same workflow.
6. "Set up a weekly alert if my visibility drops"
Ask: "Notify me every Monday if my overall AI visibility score falls more than three points week-over-week."
What comes back: a configured alert. You don't need to remember to check. The data comes to you.
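The alert rule itself is a simple week-over-week comparison against a threshold. A sketch of the condition as described above (hypothetical function name, not Reaudit's implementation):

```python
def should_alert(this_week: float, last_week: float, threshold: float = 3.0) -> bool:
    """Fire when the score drops by more than `threshold` points week-over-week."""
    return (last_week - this_week) > threshold

# A 4.5-point drop trips the alert; a 1.5-point drop does not.
print(should_alert(68.0, 72.5))  # drop of 4.5 points
print(should_alert(71.0, 72.5))  # drop of 1.5 points
```

The point of phrasing it in chat is that you never write this logic yourself; you describe the condition in plain English and the tool configures it.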
Why this is different from pasting a screenshot into ChatGPT
People try to get AI analysis on their data by screenshotting a dashboard or exporting a CSV and uploading it. That works, once. The LLM can read what you give it, but it can't ask follow-up questions against live data, it doesn't know what changed since the last time you uploaded something, and it has no context about your specific competitors or content history.
The MCP connection gives Claude persistent, structured access to your Reaudit account. When you ask "why did my visibility dip this week?" it can cross-reference citation volume, sentiment changes, and competitor activity at the same time, because all of that data is available to it. The answer is grounded in your situation, not generic advice about AI visibility in general.
That's the actual shift. Not speed. Quality of thinking.
The strategic questions this unlocks
Once the connection is live, the questions get more interesting than "what's my score."
You can ask: "What would it take to outrank [competitor] on Perplexity?" and get a gap analysis with a content plan attached. You can ask: "How does my brand appear differently in French vs English AI responses?" and get a localisation read without pulling separate reports. You can ask: "Should I focus on Google AI or Perplexity this quarter?" and get an answer based on where your citation velocity is stronger.
None of this requires SQL, API calls, or exporting anything. It requires knowing what question to ask.
How to set it up
The MCP server is published as @reaudit/mcp-server on npm. Install it once, authenticate through Reaudit's OAuth 2.0 flow (there's no manual API key to manage), and connect Claude Desktop or Cursor through the MCP client settings.
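For Claude Desktop specifically, connecting an npm-published MCP server usually means adding an entry to its `claude_desktop_config.json`. A sketch of what that entry might look like, assuming @reaudit/mcp-server follows the common npx launch pattern (the exact arguments are an assumption; the setup wizard gives you the real values):

```json
{
  "mcpServers": {
    "reaudit": {
      "command": "npx",
      "args": ["-y", "@reaudit/mcp-server"]
    }
  }
}
```

After restarting Claude Desktop, the Reaudit tools appear in the client's tool list and the OAuth flow runs in your browser the first time a tool is called.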
Once it's connected, all 92 Reaudit tools are available through natural language. Visibility scores, brand mentions, competitor comparisons, citation sources, content generation, WordPress publishing, social scheduling, alerts, all of it accessible from a single conversation.
For a non-technical marketer, the setup takes under ten minutes. The harder part is deciding which question to ask first.
FAQ
What is an AI visibility score?
It's a 0-100 score, updated daily, that aggregates citation volume, sentiment, and relevance across the AI engines Reaudit tracks.
Do I need a developer to set this up?
No. The MCP server installs via npm and the OAuth authentication flow is handled through Reaudit's UI. You follow a setup wizard — no manual configuration required.
Which AI engines does Reaudit track?
The Starter plan covers Google AI Overview, Google AI Mode, Perplexity, and ChatGPT. The Professional plan adds Claude, Gemini, Microsoft Copilot, Meta AI, DeepSeek, Grok, and Mistral — eleven engines total.
Does the MCP work with tools other than Claude Desktop?
Yes. Any MCP-compatible client works; Cursor is the other common setup. As the MCP ecosystem grows, more clients will support it.
How is my data handled?
The MCP server connects to your Reaudit account via OAuth. The LLM reads your data through that authenticated connection; it doesn't store or retain it separately.