Keywords AI
Most MCP articles stop at "here are some MCP servers." This guide shows you how to use MCP as an interface to Keywords AI observability data (logs, traces, prompts, customers) inside coding assistants like Cursor and Claude Code.
Your Keywords AI MCP server is not “another tool integration.” It’s a bridge that lets your AI assistant query your observability system directly from the chat box / IDE. The docs say it plainly: it gives real-time access to Keywords AI logs, traces, prompts, and customer data in your coding environment.
So the mental model is:
list_logs, list_traces, get_trace_tree, prompt/version fetchers, customer budget usage, etc.

This is why your blog shouldn’t read like “Top MCP servers.” It should read like: “Use MCP to turn your observability data into an assistant-native debugging workflow.”
Your docs call out two supported transports for the Keywords AI MCP server:
Claude Code’s MCP docs add another concept you specifically asked about: SSE. They also say it’s deprecated in Claude Code and that HTTP is preferred.
Let’s translate these into “what it means for developers.”
Streamable HTTP means the MCP server is reachable via an HTTP URL, and the transport supports streaming responses as they’re produced (instead of waiting for a single big response). Your hosted endpoint is:
https://mcp.keywordsai.co/api/mcp

Why it’s the best default for Keywords AI MCP: there’s no local process to install or keep updated, it behaves the same on every machine, and auth is a single Bearer header.
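If you want to sanity-check the endpoint before touching any editor config, here’s a minimal smoke test, assuming the server speaks standard MCP streamable HTTP (JSON-RPC over POST); the protocolVersion string is just the MCP spec revision your client targets:

```typescript
// smoke-test.ts — send an MCP "initialize" request to the hosted endpoint.
const res = await fetch("https://mcp.keywordsai.co/api/mcp", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Streamable HTTP servers may reply with plain JSON or an SSE stream.
    "Accept": "application/json, text/event-stream",
    "Authorization": `Bearer ${process.env.KEYWORDS_API_KEY}`,
  },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "initialize",
    params: {
      protocolVersion: "2025-03-26",
      capabilities: {},
      clientInfo: { name: "smoke-test", version: "0.0.1" },
    },
  }),
});
console.log(res.status, await res.text());
```

A 200 with an `initialize` result means the transport and your API key are both fine; anything else points at auth or networking before you start blaming the editor.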
stdio means your MCP “server” runs as a local process. The client launches it, writes requests to stdin, reads responses from stdout.
Your docs’ local setup is exactly that: clone keywordsai-mcp, build it, then configure Cursor/Claude to run node .../dist/lib/index.js with KEYWORDS_API_KEY in env.
Why devs choose stdio: it works offline, keeps the API key on the local machine, and lets you modify the server code directly.

Tradeoffs: every developer has to clone, build, and update the server themselves, so it doesn’t scale well across a team.
SSE is a server→client streaming mechanism over HTTP. In MCP land, some clients historically supported “SSE MCP servers.”
But Claude Code explicitly says: SSE transport is deprecated and you should use HTTP servers instead where available.
Cursor community docs/examples often mention “stdio or sse” as transport options (depending on the server), but for Keywords AI specifically, your doc path is cleaner: streamable HTTP or stdio.
Your docs define three installation modes under “Installation”: hosted HTTP, local stdio, and a private HTTP deployment for teams.
Here’s the practical framing (this is the part your audience cares about):
Hosted HTTP. Use when: personal use, fastest setup, minimal maintenance.
Config is basically:
```json
{
  "mcpServers": {
    "keywords-ai": {
      "url": "https://mcp.keywordsai.co/api/mcp",
      "headers": {
        "Authorization": "Bearer your_keywords_ai_api_key"
      }
    }
  }
}
```
That exact snippet is in your docs, along with Cursor + Claude Desktop config locations.
Local Stdio. Use when: offline dev, you want to modify the MCP server, or you’re testing.
The docs give the full flow: clone repo, npm install, npm run build, then configure the server as a local node command with env var KEYWORDS_API_KEY.
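As a concrete sketch of that flow (the GitHub org/URL is my assumption from the package name; check the docs for the canonical repo):

```bash
# Assumed repo location; verify against the docs
git clone https://github.com/Keywords-AI/keywordsai-mcp.git
cd keywordsai-mcp
npm install
npm run build
# The entry point referenced by the client config:
node dist/lib/index.js  # expects KEYWORDS_API_KEY in the environment
```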
Private HTTP (Teams). Use when: you don’t want API keys on every laptop and you want one shared endpoint.
Your docs recommend deploying the MCP server (open source) to Vercel, storing KEYWORDS_API_KEY in Vercel env, then teammates use a URL like:
```json
{
  "mcpServers": {
    "keywords-ai": {
      "url": "https://your-project.vercel.app/mcp"
    }
  }
}
```
This is explicitly called out as “ideal for teams” because you deploy once and avoid exposing client-side keys.
Your docs point to Cursor’s MCP config at ~/.cursor/mcp.json. Put the hosted config there (from your docs) and restart Cursor.
Cursor config switches to a command-based server:
```json
{
  "mcpServers": {
    "keywords-ai": {
      "command": "node",
      "args": ["/Users/yourname/keywordsai-mcp/dist/lib/index.js"],
      "env": {
        "KEYWORDS_API_KEY": "your_keywords_ai_api_key"
      }
    }
  }
}
```
Again: directly from your docs.
Some Cursor MCP server directories describe server “type” choices like stdio or sse depending on the server endpoint offered. For Keywords AI, you’re giving people a cleaner story: streamable HTTP URL or stdio local process.
Claude Code’s MCP page is unusually useful because it explains:
Claude Code recommends remote HTTP servers for cloud-based services. So Keywords AI’s hosted MCP server fits perfectly:
- endpoint: https://mcp.keywordsai.co/api/mcp
- auth: an Authorization: Bearer ... header

Claude Code supports adding HTTP servers via the CLI with claude mcp add --transport http ... syntax.
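A hedged example of that command (flag order per Claude Code’s CLI docs; the key is a placeholder):

```bash
claude mcp add --transport http keywords-ai https://mcp.keywordsai.co/api/mcp \
  --header "Authorization: Bearer your_keywords_ai_api_key"
```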
Claude Code still shows an SSE option, but marks it deprecated. If someone asks “should I use SSE?” your blog should say: no, use HTTP where available.
Claude Code supports local stdio servers (great for the local keywordsai-mcp repo setup).
This maps to your “Local Stdio” mode exactly.
Claude Code can be started as a stdio MCP server using:
claude mcp serve

This matters for your audience because it hints at a powerful pattern:
You can wire assistants together via MCP (Claude Code tools exposed to another client), and then use Keywords AI MCP tools inside that same workflow.
That’s how you get “workflow-y” setups without rebuilding everything.
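A sketch of that wiring: register Claude Code itself as a stdio server in another MCP client’s config (same mcpServers shape as above; assumes the claude binary is on your PATH):

```json
{
  "mcpServers": {
    "claude-code": {
      "command": "claude",
      "args": ["mcp", "serve"]
    }
  }
}
```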
Your docs list “Available tools” once connected. This is the part you should lean on as differentiation:
- list_logs: list/filter logs with queries
- get_log_detail: fetch a log by ID
- list_traces: list/filter traces
- get_trace_tree: fetch the span tree
- list_customers, get_customer_detail (including budget usage)

This is the difference between “MCP is a protocol” and “MCP is useful.” Your MCP server basically turns Keywords AI into a queryable tool layer inside the assistant.
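To make that concrete, here’s a sketch of calling one of these tools programmatically with the official TypeScript SDK. The list_logs argument shape is an assumption; discover the real input schemas via listTools first:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Connect to the hosted Keywords AI MCP endpoint over streamable HTTP.
const transport = new StreamableHTTPClientTransport(
  new URL("https://mcp.keywordsai.co/api/mcp"),
  {
    requestInit: {
      headers: { Authorization: `Bearer ${process.env.KEYWORDS_API_KEY}` },
    },
  }
);
const client = new Client({ name: "kwai-example", version: "0.0.1" });
await client.connect(transport);

// Discover the real tool names and input schemas first...
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// ...then call one. The argument shape here is hypothetical.
const result = await client.callTool({
  name: "list_logs",
  arguments: { limit: 10 },
});
console.log(result);
```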
Here are use cases that feel “native” to your MCP page (and not generic MCP filler):
In Cursor / Claude Code, that means asking the assistant things like “pull the last 20 failed logs,” “show me the span tree for this trace,” or “which customer is closest to its budget cap?”
That’s an observability workflow, but executed through MCP.
This is exactly the “different direction” you want: MCP as an interface for debugging and iteration.
You said: “I don’t know how to do this yet, but you can think of some ways.” So here are realistic implementation patterns consistent with how MCP systems are structured.
If you control the MCP server (your open-source keywordsai-mcp or a fork), you can wrap every tool handler (see the sketch after this list):

- record: tool name, arguments (redact anything sensitive), duration, success/error
- emit each call as a log/event back into Keywords AI itself

This gets you per-tool usage analytics: which tools get called, how often, and how long they take, e.g. “how often does the assistant call get_trace_tree?”

Why this is clean: it doesn’t require hacking Cursor/Claude clients. It’s purely server-side.
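A minimal sketch of that wrapper, assuming a handler signature like the TypeScript SDK’s tool callbacks; emitTelemetry and its record shape are hypothetical, standing in for however you ship events to Keywords AI:

```typescript
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

// Hypothetical sink: POST the record to your own telemetry/log endpoint.
async function emitTelemetry(record: Record<string, unknown>): Promise<void> {
  await fetch(process.env.TELEMETRY_URL!, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(record),
  });
}

// Wrap any tool handler so every call is recorded server-side.
function instrument(toolName: string, handler: ToolHandler): ToolHandler {
  return async (args) => {
    const start = Date.now();
    try {
      const result = await handler(args);
      await emitTelemetry({
        tool: toolName,
        args: redact(args), // strip keys, customer PII, etc.
        durationMs: Date.now() - start,
        ok: true,
      });
      return result;
    } catch (err) {
      await emitTelemetry({
        tool: toolName,
        args: redact(args),
        durationMs: Date.now() - start,
        ok: false,
        error: String(err),
      });
      throw err;
    }
  };
}

// Naive redaction placeholder; replace with a real policy.
function redact(args: Record<string, unknown>): Record<string, unknown> {
  const { apiKey, ...rest } = args; // hypothetical sensitive field
  return rest;
}
```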
The hard part isn’t logging MCP calls. It’s tying them to the “parent” LLM request.
Two practical correlation strategies:
Session correlation: carry a session_id in the client-side workflow (or project scope) and attach it to both the LLM requests and the MCP tool calls made in that session.

Traceparent-style propagation: pass a trace/span ID (W3C traceparent or similar) from the parent request into each MCP tool call, so the tool call lands inside the same trace tree.
This is how you turn “MCP happened” into “MCP happened because the model decided X.”
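As a sketch, here’s what traceparent-style propagation could look like on the client side, reusing the Client type from the earlier SDK example. The _traceparent argument is an assumption; the server’s input schemas would need to accept and record it:

```typescript
import { randomBytes } from "node:crypto";
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Build a W3C-style traceparent: version-traceId-spanId-flags.
function newTraceparent(): string {
  const traceId = randomBytes(16).toString("hex");
  const spanId = randomBytes(8).toString("hex");
  return `00-${traceId}-${spanId}-01`;
}

// Call a tool with a correlation ID the server can log alongside the
// parent LLM request, tying both into one Keywords AI trace.
async function callWithTrace(
  client: Client,
  tool: string,
  args: Record<string, unknown>,
  traceparent = newTraceparent()
) {
  return client.callTool({
    name: tool,
    // _traceparent is a hypothetical extra field the server would log.
    arguments: { ...args, _traceparent: traceparent },
  });
}
```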
Because the Keywords AI MCP server provides access to logs, traces, customers, and prompts, you can also build meta dashboards like:
- most common MCP queries inside Cursor (what engineers ask)
- most used observability dimensions (latency vs cost vs error rate)
- top MCP tools by usage and time
This becomes a feedback loop on your users’ debugging behavior.
Your “Private HTTP (Teams)” mode (Vercel deployment) is perfect for this.
Because everyone goes through one shared URL, you can rotate the API key in one place, apply shared redaction and rate limits, and audit every tool call centrally.
It’s not just “avoid client-side API keys.” It’s also “centralize behavior and auditing.”
- hosted endpoint: https://mcp.keywordsai.co/api/mcp
- open-source server: keywordsai-mcp

Your docs already list the most common issues:
- auth header formatting (Authorization: Bearer ...)
- env var correctness (KEYWORDS_API_KEY)

Add one Claude Code–specific footnote that saves people hours: if large tool outputs (big span trees, long log lists) come back truncated, raise MAX_MCP_OUTPUT_TOKENS.
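For example (the variable name is from Claude Code’s settings; the value here is illustrative):

```bash
# Let Claude Code accept larger MCP tool responses than the default cap
export MAX_MCP_OUTPUT_TOKENS=50000
```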