Claude Code Hooks & Cursor Agent Tracing: Setup Guide for AI Code Assistant Observability

January 12, 2026

Cursor and Claude Code are AI coding assistants that think, edit files, run shell commands, and call tools on your behalf. By default, though, you have no visibility into what they're doing: every thinking block, tool call, and file edit happens without any way to inspect it.

This guide covers how to set up hooks that capture agent activity and send hierarchical traces to Keywords AI.

[Image: Cursor agent tracing visualization]

Table of Contents

  1. Why You Need Observability for Cursor and Claude Code
  2. What Are Hooks in AI Code Assistants
  3. Cursor Agent Tracing Setup
  4. Claude Code Hooks Setup
  5. What Data Gets Captured
  6. How to Debug with Agent Traces
  7. Claude Code vs Cursor Comparison

Why You Need Observability for Cursor and Claude Code

When you use Cursor or Claude Code, the agent:

  • Thinks through the problem (reasoning blocks)
  • Reads your files
  • Edits code
  • Runs shell commands
  • Calls MCP tools

Without observability, you can't answer basic questions:

  • Why did the agent make that change?
  • What files did it read?
  • How long did each step take?
  • Did it fail silently?

With Keywords AI, every agent turn becomes a hierarchical trace:

cursor_abc123_xyz789 (38.9s)
├── Thinking 1 (0.5s) - "Let me analyze the code..."
├── Thinking 2 (0.3s) - "I should update the function..."
├── Edit: utils.py (0.1s)
├── Shell: npm test (4.1s)
├── MCP: list_logs (0.8s)
└── Thinking 3 (0.2s) - "Tests passed, done."

Now you can see exactly what happened in each agent turn.


What Are Hooks in AI Code Assistants

Both Cursor and Claude Code provide hooks, which are extension points that fire during agent execution:

Event             Description
Before prompt     User submits a message
After thinking    Agent produces a reasoning block
After tool call   Agent uses a tool (file, shell, MCP)
After response    Agent completes its turn
Stop              Agent stops (user cancellation or completion)

Our integration uses these hooks to capture events in real-time and send them to Keywords AI as structured traces.

The mental model:

Agent Event → Hook Fires → Python Script → Keywords AI API
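To make that pipeline concrete, here is a minimal sketch of what such a hook script can look like in Python. It assumes the event arrives as JSON on stdin and that a hypothetical KEYWORDSAI_TRACE_URL environment variable holds the trace-ingestion endpoint; the real scripts from the setup guides below also handle span building, batching, and error cases.

#!/usr/bin/env python3
# Minimal hook-script sketch: read one agent event from stdin and
# forward it to Keywords AI. Endpoint and payload shape are illustrative.
import json
import os
import sys
import urllib.request

def main() -> None:
    # Tracing is opt-in via the environment (step 1 of both setups).
    if not os.environ.get("TRACE_TO_KEYWORDSAI"):
        return
    api_key = os.environ["KEYWORDSAI_API_KEY"]
    endpoint = os.environ["KEYWORDSAI_TRACE_URL"]  # hypothetical env var for the ingestion URL

    event = json.load(sys.stdin)  # the hook event, delivered as JSON on stdin

    req = urllib.request.Request(
        endpoint,
        data=json.dumps(event).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req, timeout=5)

if __name__ == "__main__":
    main()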

Cursor Agent Tracing Setup

[Image: Cursor agent tracing visualization]

Cursor provides rich hooks that fire during agent execution. The architecture captures events as they happen:

Cursor Agent Hooks Reference

Hook                  Trigger                   Data Captured
beforeSubmitPrompt    User sends prompt         User input, start time
afterAgentThought     Agent produces thinking   Thinking text, duration
afterShellExecution   Shell command completes   Command, output, exit code
afterFileEdit         File edited               File path, edits, diff
afterMCPExecution     MCP tool completes        Tool name, input, output
afterAgentResponse    Agent responds            Response text (creates root span)

Cursor hooks fire in real-time. Each event arrives as it happens, so you get streaming observability into agent behavior.
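As an illustration, an afterShellExecution event might carry a payload along these lines. The exact field names here are an assumption; the captured data (command, output, exit code) comes from the table above.

{
  "hook_event_name": "afterShellExecution",
  "command": "npm test",
  "output": "...test output...",
  "exit_code": 0
}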

Quick Setup

Setting up Cursor observability takes just 3 steps:

  1. Set environment variables (KEYWORDSAI_API_KEY, TRACE_TO_KEYWORDSAI)
  2. Download the hook script to ~/.cursor/hooks/
  3. Configure hooks.json with all hook events (see the sketch below)
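For orientation, a hooks.json that wires all six events to one script might look roughly like this. The script name is a placeholder, and the exact schema may differ by Cursor version; the full guide has the canonical file.

{
  "version": 1,
  "hooks": {
    "beforeSubmitPrompt":  [{ "command": "~/.cursor/hooks/keywordsai_hook.py" }],
    "afterAgentThought":   [{ "command": "~/.cursor/hooks/keywordsai_hook.py" }],
    "afterShellExecution": [{ "command": "~/.cursor/hooks/keywordsai_hook.py" }],
    "afterFileEdit":       [{ "command": "~/.cursor/hooks/keywordsai_hook.py" }],
    "afterMCPExecution":   [{ "command": "~/.cursor/hooks/keywordsai_hook.py" }],
    "afterAgentResponse":  [{ "command": "~/.cursor/hooks/keywordsai_hook.py" }]
  }
}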

👉 Full Cursor Setup Guide →


Claude Code Hooks Setup

[Image: Claude Code agent tracing visualization]

Claude Code takes a different approach. Rather than firing hooks in real time, it stores conversation transcripts as JSONL files, and a single hook fires after the agent response completes:

How Claude Code Hooks Work

Claude Code uses a Stop hook that triggers after each agent turn. The hook script (sketched below):

  1. Locates the active transcript file
  2. Reads new messages since last processing
  3. Parses thinking blocks, tool calls, and responses
  4. Builds hierarchical spans
  5. Sends to Keywords AI
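A condensed Python sketch of steps 2 and 3, assuming each transcript line is a JSON object whose message holds a list of content blocks (the exact transcript shape may vary; the script from the setup guide handles the real format):

import json
from pathlib import Path

def new_messages(transcript: Path, offset_file: Path) -> list[dict]:
    # Step 2: read only the lines added since the last run.
    offset = int(offset_file.read_text()) if offset_file.exists() else 0
    lines = transcript.read_text().splitlines()
    offset_file.write_text(str(len(lines)))
    return [json.loads(line) for line in lines[offset:]]

def classify(entry: dict) -> str:
    # Step 3: map a transcript entry to a span type.
    content = entry.get("message", {}).get("content")
    if isinstance(content, list):
        for block in content:
            if block.get("type") == "thinking":
                return "generation"  # thinking span
            if block.get("type") == "tool_use":
                return "tool"        # tool-call span
    return "agent"                   # everything else belongs to the root turn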

This post-hoc approach means Claude Code captures richer metadata, such as token counts, that isn't available during streaming.

Quick Setup

Setting up Claude Code observability takes just 3 steps:

  1. Set environment variables (KEYWORDSAI_API_KEY, TRACE_TO_KEYWORDSAI)
  2. Download the hook script to ~/.claude/hooks/
  3. Configure settings.json with the Stop hook (see the sketch below)
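As a reference point, the Stop-hook entry in settings.json looks roughly like this. The script name is a placeholder; verify the schema against the full guide for your Claude Code version.

{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "python3 ~/.claude/hooks/keywordsai_hook.py" }
        ]
      }
    ]
  }
}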

👉 Full Claude Code Setup Guide →


What Data Gets Captured

Both integrations capture rich observability data and organize it into hierarchical spans:

Span Types

Span       log_type     Description
Root       agent        The complete agent turn
Thinking   generation   Reasoning blocks
Tool       tool         File/shell/MCP invocations
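In code, the hierarchy reduces to spans that point at a parent. A sketch, with field names chosen for illustration rather than taken from the Keywords AI schema:

import uuid
from datetime import datetime, timezone

def make_span(name: str, log_type: str, parent_id: str | None = None) -> dict:
    # One span record; parent_id is None only for the root "agent" span.
    return {
        "span_id": uuid.uuid4().hex,
        "parent_id": parent_id,
        "name": name,
        "log_type": log_type,  # "agent", "generation", or "tool"
        "start_time": datetime.now(timezone.utc).isoformat(),
    }

root = make_span("cursor_turn", "agent")
thinking = make_span("Thinking 1", "generation", parent_id=root["span_id"])
shell = make_span("Shell: npm test", "tool", parent_id=root["span_id"])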

Common Data

Data                 Description
User prompt          What you asked the agent
Assistant response   The agent's final response
Thinking blocks      Reasoning/planning content
Tool calls           File reads, writes, shell commands, MCP tools
Timing               Start time, end time, duration per span

Claude Code Bonus: Token Usage

Claude Code captures additional metrics not available in Cursor:

Data          Description
Token usage   Prompt tokens, completion tokens, cache tokens
Model name    Which model was used (e.g., claude-sonnet-4-20250514)
Cache info    Cache creation and read tokens
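Since these metrics live in the transcript rather than in hook events, extracting them is a matter of reading the usage block from each assistant entry. The field names below match the Anthropic API's usage object; treat the surrounding transcript structure as an assumption to verify:

def extract_usage(entry: dict) -> dict:
    # Pull token metrics from an assistant transcript entry.
    message = entry.get("message", {})
    usage = message.get("usage", {})
    return {
        "model": message.get("model"),  # e.g. claude-sonnet-4-20250514
        "prompt_tokens": usage.get("input_tokens"),
        "completion_tokens": usage.get("output_tokens"),
        "cache_creation_tokens": usage.get("cache_creation_input_tokens"),
        "cache_read_tokens": usage.get("cache_read_input_tokens"),
    }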

How to Debug with Agent Traces

Example workflows:

"Why did the agent take so long?"

  1. Open the trace in Keywords AI
  2. Look at the span timeline
  3. Find the longest-running span
  4. Investigate: Was it a slow shell command? A large file read? An MCP timeout?

"The agent made the wrong edit"

  1. Find the trace for that turn
  2. Read the thinking spans leading up to the edit
  3. See what context the agent had (what files it read)
  4. Identify where its reasoning went wrong

"Compare two approaches"

When you ask the agent to solve a problem differently:

  1. Pull traces for both attempts
  2. Compare thinking patterns
  3. Compare tool usage (did one read more files?)
  4. Compare durations (which was faster?)

"Track agent behavior over time"

  • Are turns getting faster or slower?
  • Is the agent using more or fewer tool calls?
  • Are certain tools failing more often?

Claude Code vs Cursor Comparison

Feature       Cursor                         Claude Code
Hook type     Multiple real-time hooks       Single Stop hook
Data source   JSON via stdin                 JSONL transcript files
Timing        Real-time (as events happen)   Post-hoc (after response)
Token usage   Not available                  Full usage details
Cache info    Not available                  Cache creation/read tokens
MCP support   Yes                            Yes

Both tools give you full observability into agent behavior. The main difference is that Claude Code also captures token usage, which is useful for cost tracking.


Get Started

With Keywords AI hooks, you get:

  • Every thinking block the agent produces
  • Every tool call with inputs and outputs
  • Duration for each span
  • Token usage (Claude Code only)

Setup takes about 5 minutes.

  1. Get your Keywords AI API key
  2. Follow the setup guide for Cursor or Claude Code linked in the sections above
