LIVE OBSERVABILITY LAYER

See every request
your AI Coding Agent
makes.

Claude Code
Cursor
Windsurf
Cline
OpenCode
OpenClaw

Works with every tool your team already runs.

Claude Code users
You ran it for hours. Find out exactly what it did, what it read, what it sent.
AI coding teams
Cursor, Cline, Windsurf, Claude Code — all in one view.
AI agencies
Deliver agents with full audit logs built in.
leanmcp // agent log recording

time      agent        request                                             model          tokens   status
14:28:01  Claude Code  implement stripe webhook — reading src/config/.env  claude-sonnet   5,841   ⚠ SECRET
14:22:18  Cursor       debug this error: connection refused db:5432        gpt-4o          2,341   ⚠ SECRET
14:19:55  Cline        write tests for the user auth module                claude-sonnet   3,102   OK
14:17:30  Cursor       AWS_SECRET=AKIAIOSFODNN7EXAMPLE in context          gpt-4o            892   BLOCKED
14:15:02  Cline        refactor the payment processing module              claude-sonnet   4,210   OK
What you get

Visibility. Cost.
Risk. Control.

01 · VISIBILITY

See every request

Every AI call your tools make — what was sent, which model, how many tokens. Plain language, no technical knowledge needed.

02 · COST

Track every cent

Broken down by person, tool, and session. Set limits before the bill surprises you.

03 · RISK

Catch sensitive data

Passwords, tokens, customer data — detected before they leave. Alert sent the moment something is found.

04 · CONTROL

Know who runs what

One view of every person, every tool, every session. Policies, limits, and an audit log you can share.

Claude Code scenario

Claude Code ran overnight. What did it send?

Ran for 4 hours, made 400 calls, read through your entire src/ directory. Did it touch a .env file? Send a database string? Hit an external URL you didn't authorize? No way to find out.

LeanMCP: full per-session trace, file access log, secret detection alerts
Cursor / Cline / Windsurf scenario

Your .env file got sent. You didn't know.

Cursor's .cursorignore didn't protect it. Your .env with AWS_SECRET_ACCESS_KEY got included in context and sent to OpenAI's servers. No log, no alert — until the breach.

Detected by LeanMCP: AWS Access Key pattern in request body → alert sent
Team scenario

5 engineers, 5 agents. One bill, no breakdown.

OpenAI bill: $1,200. Which agent? Which engineer? Which session burned $300 in one run? No breakdown, no idea where to start cutting.

LeanMCP shows: cost per agent, per user, per session, per model
Get started

3 steps.
Under 10 minutes.

01

Connect your tool.

One setting. 2 minutes. Claude Code, Cursor, Windsurf, Cline, OpenClaw.

02

Run a session.

Everything is logged. Files read, requests made, tokens used, cost per session.

03

Open the dashboard.

app.leanmcp.com/observability
— your first log is already there.

Claude Code
Edit ~/.claude/settings.json
~/.claude/settings.json — recommended
{
  "autoUpdatesChannel": "latest",
  "env": {
    "ANTHROPIC_BASE_URL": "https://aigateway.leanmcp.com/v1/anthropic",
    "ANTHROPIC_AUTH_TOKEN": "leanmcp_your_api_key_here",
    "ANTHROPIC_API_KEY": ""
  }
}
shell — alternative (add to ~/.zshrc or ~/.bashrc)
export ANTHROPIC_BASE_URL=https://aigateway.leanmcp.com/v1/anthropic
export ANTHROPIC_AUTH_TOKEN=leanmcp_your_api_key_here
export ANTHROPIC_API_KEY=""  # must be empty — conflicts if set
ANTHROPIC_API_KEY must be set to "" (an empty string) — if both keys are set, they conflict and requests fail.
Full guide →
Cursor
Settings → Models → OpenAI API Key section
Cursor Settings → Models
// Override OpenAI Base URL: https://aigateway.leanmcp.com/v1/openai
// OpenAI API Key field: leanmcp_your_api_key_here
Cmd+, (Mac) or Ctrl+, (Win) to open settings. Restart Cursor after saving.
Full guide →
Cline
Edit ~/.cline/data/globalState.json
~/.cline/data/globalState.json
// Close Cline before editing this file
{
  "actModeApiProvider": "leanmcp",
  "planModeApiProvider": "leanmcp",
  "apiProvider": "leanmcp",
  "leanmcp": {
    "apiKey": "leanmcp_your_api_key_here",
    "baseURL": "https://aigateway.leanmcp.com/v1/anthropic",
    "modelId": "anthropic/claude-opus-4.5"
  }
}
Close Cline before editing — it overwrites the file on exit.
Full guide →
Windsurf
Settings → search “AI” → custom endpoint
Windsurf settings.json (alternative)
{
  "ai.provider": "openai-compatible",
  "ai.baseUrl": "https://aigateway.leanmcp.com/v1/openai",
  "ai.apiKey": "leanmcp_your_api_key_here"
}
Or set directly in UI: Settings → LLM Providers → Base URL → paste endpoint → enter key → Save.
Full guide →
OpenCode
opencode.json in project root or ~/.config/opencode/
opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "leanmcp": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LeanMCP Gateway",
      "options": {
        "baseURL": "https://aigateway.leanmcp.com/v1/anthropic/",
        "apiKey": "leanmcp_your_api_key_here"
      },
      "models": {
        "claude-sonnet-4-5": { "name": "claude-sonnet-4-5" }
      }
    }
  },
  "model": "leanmcp/claude-sonnet-4-5"
}
Run opencode, then type /models to confirm the provider appears.
Full guide →
OpenClaw
Edit config.yaml in your OpenClaw project
config.yaml
openai_base_url: "https://aigateway.leanmcp.com/v1/openai"
openai_api_key: "leanmcp_your_api_key_here"
Every OpenClaw session is logged by skill, by model call, and by cost.
Full guide →
See it in action — 30 seconds
Get started free → app.leanmcp.com
How it works

One URL change.
Complete visibility.

Visibility.
Every request.

See exactly what was sent to every AI model — in plain text
Which tool triggered it — OpenClaw, Cursor, Cline, Windsurf, Claude Code, OpenCode, or your own code
How many tokens it used and what it cost
Sensitive data flagged and highlighted inline
Searchable history — find any session, any request
Export to CSV for audits or compliance
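The CSV export can also feed your own analysis. A minimal sketch, assuming a file with agent and cost_usd columns — the file name and column names are illustrative, not the actual export schema:

```python
import csv
from collections import defaultdict

def cost_per_agent(path):
    """Sum cost per agent from an exported log CSV.

    Hypothetical schema: one row per request, with "agent" and
    "cost_usd" columns. Check your real export for the exact headers.
    """
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["agent"]] += float(row["cost_usd"])
    return dict(totals)
```

The same pattern extends to per-model or per-user breakdowns by switching the grouping column.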
LEANMCP // AGENT LOG LIVE
TIME
AGENT
REQUEST
MODEL
TOKENS
STATUS
14:28:01
Claude Code
implement the stripe webhook handler
claude-sonnet
5,841
OK
02:14:33
OpenClaw
reading /projects/api/.env for context...
gpt-4o
3,102
BLOCKED
02:11:17
OpenClaw
deploy this service via AWS CLI
claude-4.6
1,847
⚠ REVIEW
14:22:18
Cursor
debug: connection refused at db:5432
gpt-4o
2,341
⚠ SECRET
14:19:55
Cline
write tests for the user auth module
claude-4.6
3,102
OK
14:17:30
Cursor
AWS_SECRET=AKIAIOSFODNN7EXAMPLE
gpt-4o
892
BLOCKED

Cost.
Down to the agent.

Cost per person — see who spent what, on which tool
Cost per session — know which overnight run cost $28
Compare models — GPT-4o vs GPT-4o-mini on the same tasks
Budget alerts via email or Slack at any threshold
Hard limits — block requests when spend hits your ceiling
COST BREAKDOWN — THIS MONTH
── By Agent ──────────────────────────
OpenClaw (overnight)    847 calls   $28.40
Cursor (alex@)          612 calls   $14.20
Cline (sarah@)          203 calls    $6.80
──────
Total this month                    $49.40

── Budget Alert ──────────────────────
Monthly limit:   $60.00
Current spend:   $49.40 (82%)
⚠ Alert sent via Slack · $50 threshold
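The budget alert above boils down to a simple threshold check. A hypothetical sketch — the real gateway enforces limits server-side, and this function only illustrates the logic:

```python
def check_budget(spend, limit, alert_at):
    """Return (percent of monthly limit used, whether an alert fires).

    Illustrative only — LeanMCP evaluates budgets in the gateway,
    not in client code.
    """
    pct_used = round(spend / limit * 100)
    return pct_used, spend >= alert_at
```

With the mockup's numbers ($49.40 spent of a $60.00 limit), pct_used comes out at 82; an alert configured at a lower threshold would already have fired, while a hard limit at $60.00 would still allow requests.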

Risk.
Know before it leaves.

Detects passwords, API keys, and private tokens in any request
Detects customer emails, phone numbers, credit card numbers
Detects internal files and business data being sent to models
You choose the action — log it, block it, or alert your team
Real-time alerts via email or Slack when something is found
🔑
API Keys & Secrets
AWS Access Keys
GitHub Tokens
OAuth Tokens
Private keys RSA/SSH
🗄️
Database & Infra
Connection strings
DB passwords
Internal URLs
Private IPs
👤
Personal Data
Email addresses
Phone numbers
Credit card numbers
SSNs
🏢
Business Data
Customer data patterns
Financial figures
Internal configs
Trade secrets

Team control.
One view for the whole team.

Per-person view — see what each team member's tools are doing
One shared dashboard — all agents, all spend, all in one place
Team policies — rules that apply to everyone automatically
Anomaly alerts — flag when any agent behaves unusually
Audit log — a record you can export and share when asked
👥
Per-user view
See alex@, sarah@, and dev@ separately. Know who's running which agent and how much it costs per person.
🛡️
Team policies
Set rules once — apply to all agents for all users. Block certain models, require approval for large sessions, limit spend per day.
📋
Audit log
Full exportable history of every request, every agent action, every alert. For your own records or for a customer who asks.
Your data, your rules

We never train
on your data.
You stay in control.

What we log

Request timing, model used, token counts, cost, and status. Your request content is stored in your account only — for your visibility. You set the retention period. Delete anytime.

What we never do

We never train on your data. Never read your content except to run the secret detection you configured. Never share or sell it. Your code stays yours.

You stay in control

Delete your logs from your dashboard anytime. Export everything before deleting. Your data leaves when you say so.

Compatible tools

Works with every AI tool
your team runs today.

Your engineers change one setting in each tool. That's it. Nothing else changes about how they work.

Claude Code
Full agentic session tracing for every Claude Code run.
● Live
Setup guide →
Cursor
Change one setting. Every request logged immediately.
● Live
Setup guide →
Cline
Full request visibility including every file pulled in.
● Live
Setup guide →
Windsurf
Drop-in via one settings change.
● Live
Setup guide →
OpenCode
One config file and every session is fully logged.
● Live
Setup guide →
OpenAI
● Live
Anthropic
● Live
xAI / Grok
● Live
Fireworks
● Live
ElevenLabs
● Live
8,000+
developers
2M+
requests logged
$0
data sold
“Connected Claude Code in 3 minutes. First session showed it had read my .env file twice and sent the contents to Anthropic. I didn’t know that was even possible.”
— Solo developer · Claude Code + LeanMCP
“Our team was spending $1,200/month on OpenAI. LeanMCP showed one engineer’s overnight Cursor session was burning $300 in a single run. We fixed it the next morning.”
— Engineering lead, 6-person team · Cursor + LeanMCP
“Set it up before a client demo. Showed the client their Claude Code agent’s full request log in real time. They signed the contract.”
— AI agency founder · Claude Code + LeanMCP
Pricing

Try it yourself.
Or set it up
for your team.

Try it yourself
Pro
$29 / month
Connect one or more AI tools. See everything — costs, requests, alerts. Know what's happening within 10 minutes of signing up.
What you get →
Full dashboard — 30-day log history
Cost breakdown by person, tool, and day
Real-time alerts when sensitive data is detected
Budget limits — stop spending before it surprises you
Export logs for audits or compliance
See your first log → app.leanmcp.com
Set it up for your team
Team
$99 / month
When a client or investor asks what your AI tools are doing with their data — this is your answer.
Everything in Pro, plus →
Up to 10 team members — each with their own view
Per-person cost and activity breakdown
Policies that apply to everyone automatically
Shared audit log — exportable for clients or compliance
Onboarding call to get every team member connected
Talk to us for team setup →

How credits work: Credits cover the cost of routing your AI requests through our gateway — roughly the same as going direct to OpenAI, plus a tiny extra fee (~0.5%). Most teams spend $2–10/month on credits on top of their existing model costs. The subscription adds the dashboard features. Both are separate and optional to start.

Common questions
Do I need a subscription, or just credits?
Credits alone are enough to use the gateway and see basic logs (7-day history, no alerts). The Pro ($29/mo) and Team ($99/mo) subscriptions add the features that turn logs into action: 30-day history, real-time Slack/email alerts when a secret is detected, budget limits, cost breakdowns, and team controls. Most serious users subscribe once they see their first alert.
What are credits exactly?
Credits pay for token routing through the LeanMCP gateway. When your tools make a request to OpenAI or Anthropic via the gateway, a tiny amount of credits is deducted, roughly equivalent to ~0.5% of the model call cost. Your model provider charges your original API key separately as normal. Credits are just for the gateway routing layer.
Does the subscription replace my API credits?
No. They are separate. Credits pay for routing (small per-request fee). The subscription pays for dashboard features. Both are billed independently. You need credits to use the gateway regardless of whether you have a subscription.
What’s the right amount of credits to start with?
Start small. If you route 1,000 GPT-4o requests through the gateway per day, that’s roughly $0.05/day in gateway credits. Most individual developers spend $2–10/month on credits on top of their model costs.
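That estimate works out as follows — a quick sketch, where the ~$0.01 average model cost per GPT-4o request is an assumption for illustration, not a quoted price:

```python
GATEWAY_FEE_RATE = 0.005  # ~0.5% of the underlying model spend

def daily_gateway_cost(requests_per_day, avg_model_cost_per_request):
    """Estimate daily gateway credit spend for a given request volume."""
    model_spend = requests_per_day * avg_model_cost_per_request
    return model_spend * GATEWAY_FEE_RATE

# 1,000 requests/day at ~$0.01 of model cost each:
# $10/day model spend × 0.5% ≈ $0.05/day in gateway credits
```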
I have more than 10 team members. What then?
Email us and we’ll set up a custom plan. Teams with 10–50 members are our sweet spot right now. Enterprise pricing and compliance documentation available on request.

Your agents are running right now.
See your first log in under 10 minutes.

app.leanmcp.com · buy credits · change one URL · open dashboard

Go to app.leanmcp.com →