Monoscope for AI
Tool surfaces for any LLM that talks to Monoscope — over the shell, over the network, or as a packaged Claude Code skill. Same auth, same primitives, same JSON.
Two ways to wire an agent in
CLI
For shell-based agents
Command-line tool on the same API key, with JSON output for agents and tabular output for humans; packaged Claude Code skills (investigate, triage, kql-reference) wrap the same commands.
MCP Server
For any MCP client
Hosted Model Context Protocol endpoint with ~50 auto-derived REST tools plus workflow tools (analyze_issue, find_error_patterns, search_events_nl). No install; it uses the same API key as REST.
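As a sketch of wiring a client to the hosted endpoint, a remote-server entry in an MCP client's configuration might look like the following. The URL, server name, and header are placeholders, not confirmed values; check the Monoscope docs for the real endpoint, and note that clients differ in how they register remote servers (some use a config file like this, others a CLI command).

```json
{
  "mcpServers": {
    "monoscope": {
      "url": "https://mcp.monoscope.example/v1",
      "headers": {
        "Authorization": "Bearer YOUR_PROJECT_API_KEY"
      }
    }
  }
}
```

Because the tools are auto-derived from the REST API, this one entry is the whole integration: no binary to install or pin.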
When to pick which
Pick the CLI when the agent already runs in a shell — Claude Code, Cline, CI jobs, or any environment where bash is the substrate. You get tabular output for humans on the same binary, plus a curated investigate → triage skill set for Claude Code.
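To make the "same binary, two audiences" point concrete, a hypothetical session might look like the sketch below. The binary name, subcommand, and flag are illustrative guesses, not documented names; only the ideas (one API key, one query dialect, human vs. machine output) come from this page.

```shell
# Hypothetical invocation: binary name, subcommand, and --json flag
# are placeholders, not confirmed CLI surface.
export MONOSCOPE_API_KEY="your-project-key"   # same key the REST API and MCP server use

# Tabular output for a human at the terminal
monoscope query 'errors | where status >= 500 | count by service'

# JSON output for an agent to parse (same query, same binary)
monoscope query --json 'errors | where status >= 500 | count by service'
```

An agent in CI can therefore reuse whatever queries a human refined interactively, switching only the output flag.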
Pick MCP when the client speaks Model Context Protocol natively (Claude Desktop, Cursor, OpenAI Agents SDK, your own) and you’d rather not ship a binary. Tools auto-derive from the public REST API, so new endpoints become MCP tools without a server release.
Both surfaces use the same project API key and the same KQL dialect. Switching between them is configuration, not migration.