The MCP for performance engineering

Introducing Remote MCP Support in Polar Signals Cloud

July 17, 2025

Picture this: You're staring at a flame graph, trying to decipher why your application is running slower in production. The data is all there, but translating those colorful rectangles into actionable insights requires expertise and time. What if you could simply ask, "Why is my service slow?" and get an answer backed by real profiling data?

Today we're excited to announce that Polar Signals Cloud now supports the Model Context Protocol (MCP), bringing the power of AI to performance analysis. This integration transforms how developers interact with profiling data, making performance optimization as simple as having a conversation.

What is the Model Context Protocol?

The Model Context Protocol is an open standard that enables AI assistants like Claude to connect directly with external data sources and tools. Think of it as a universal adapter that lets AI applications speak fluently with your performance data. With MCP, Claude doesn't just theorize about performance issues - it analyzes your actual production profiles.
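
Under the hood, MCP is JSON-RPC 2.0 carried over a transport such as HTTP. As a rough illustration (a hand-rolled sketch in Go; in practice your editor's MCP client builds these messages for you), a tool invocation is just a small envelope naming a tool and its arguments:

package main

import (
	"encoding/json"
	"fmt"
)

// rpcRequest is the JSON-RPC 2.0 envelope that MCP tool calls travel in.
type rpcRequest struct {
	JSONRPC string `json:"jsonrpc"`
	ID      int    `json:"id"`
	Method  string `json:"method"`
	Params  any    `json:"params"`
}

// toolParams names the tool to run and the arguments to pass it.
type toolParams struct {
	Name      string         `json:"name"`
	Arguments map[string]any `json:"arguments"`
}

func main() {
	// "profile_types" is one of the Polar Signals tools described
	// later in this post; it takes no arguments.
	req := rpcRequest{
		JSONRPC: "2.0",
		ID:      1,
		Method:  "tools/call",
		Params:  toolParams{Name: "profile_types", Arguments: map[string]any{}},
	}
	out, _ := json.MarshalIndent(req, "", "  ")
	fmt.Println(string(out)) // the message an MCP client sends on your behalf
}

Running this prints the exact JSON an MCP client would send to a server; results come back in the same JSON-RPC envelope.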

Two Game-Changing Capabilities

1. Performance Analysis using Natural Language

With MCP, you can ask questions in plain English:

  • "What's consuming the most CPU in my payment service?"
  • "Show me memory allocations in the authentication module over the last 24 hours"
  • "Which functions in database.go are the biggest bottlenecks?"

Claude Code, Cursor, or really any other LLM-powered tool now understands your intent and translates it into precise queries against your profiling data. Here's what a typical interaction in Claude Code looks like:

> I want to check why Thanos is using so much CPU. The Parca Agent should have profiles for the last hour. Can you check on what is happening?

● I'll help you investigate why Thanos is using high CPU by analyzing the profiles from Parca Agent. Let me start by exploring the available profile types and then query for
  Thanos-related CPU profiles.
  ⎿  Available Profile Types:                 

     mutex:contentions:count:contentions:count
     … +8 lines (ctrl+r to expand)

● Now let me check for CPU profiles and see what labels are available to filter for Thanos:

● polarsignals - labels (MCP)(profile_type: "parca_agent:samples:count:cpu:nanoseconds:delta")
  ⎿  Available Labels:                 

     1. addonmanager_kubernetes_io_mode
     … +47 lines (ctrl+r to expand)

● Let me check what values are available for the "app" label to find Thanos-related processes:

● polarsignals - values (MCP)(profile_type: "parca_agent:samples:count:cpu:nanoseconds:delta", label_name: "app")
  ⎿  Values for label 'app':      

     1. cert-manager
     … +5 lines (ctrl+r to expand)

● Let me check the "app_kubernetes_io_name" label which might contain Thanos:

● polarsignals - values (MCP)(profile_type: "parca_agent:samples:count:cpu:nanoseconds:delta", label_name: "app_kubernetes_io_name")
  ⎿  Values for label 'app_kubernetes_io_name':

     1. alertmanager
     … +26 lines (ctrl+r to expand)

● Great! I found Thanos components. Let me query for CPU profiles from the last hour for all Thanos components:

● polarsignals - get_profile (MCP)(query: "parca_agent:samples:count:cpu:nanoseconds:delta{app_kubernetes_io_name=~\"thanos-.*\"}", time_range: "1h")
  ⎿  Profile Query Results for: parca_agent:samples:count:cpu:nanoseconds:delta{app_kubernetes_io_name=~"thanos-.*"}
     Time Range: 2025-07-10T15:30:00Z to 2025-07-10T16:30:00Z

     … +40 lines (ctrl+r to expand)

● The profile shows high CPU usage in Thanos components. Let me get more specific information about which Thanos component is consuming the most CPU:

● Task(Analyze Thanos CPU profiles)
  ⎿  Done (13 tool uses · 25.2k tokens · 2m 2.6s)                                                                                                                                       

● Based on the CPU profile analysis, Thanos Query is consuming the most CPU (34.5 trillion samples vs 18.9 trillion for Store). The main culprits are:

  1. Deduplication operations - Taking ~20% of CPU time, particularly:
    - dedupSeriesSet.Next() at pkg/dedup/iter.go:305 - 2.94 trillion samples
    - dedupSeriesSet.fillBuffer() at pkg/dedup/iter.go:404 - 1.82 trillion samples
  2. Downsampling operations - Taking ~9% of CPU:
    - downsampleRaw() at pkg/compact/downsample/downsample.go:1148 - 2.68 trillion samples
  3. Thanos Store shows high network I/O overhead, suggesting heavy query load.

  The deduplication logic appears to be the primary bottleneck, likely processing large amounts of overlapping time series data.

The great thing about the MCP server is that it'll explore and query the data across different dimensions. If it fires off a query that doesn't work, it'll reflect on what went wrong and try again with a new query.

2. AI-Driven Code Optimization

This is where things get really exciting. Claude can analyze your profiling data alongside your actual code to suggest specific optimizations. Using MCP with Claude Code or Cursor, you can:

  1. Identify bottlenecks: Claude examines your profiles to find hot spots
  2. Analyze the code: It reads the actual source code of problematic functions
  3. Suggest improvements: Get specific, actionable code changes based on real data

Here is an example using Cursor with the Polar Signals Remote MCP. We prompted Cursor with:

> We are looking at Thanos. We want to improve the performance of the Thanos query component. Check the last hour's parca agent profiling data on Polar Signals Cloud and make suggestions on where in our code you would improve things.

Followed by a second prompt:

> Where is the most time spent allocating memory? Maybe make an easy improvement in the code.
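
Cursor's actual suggestion will depend on your profiles, but to make the idea concrete, here is a hypothetical Go sketch of the kind of easy allocation win such a prompt tends to surface, inspired by the fillBuffer hot spot in the transcript above (the types and function names are invented for illustration, not actual Thanos code):

package main

type sample struct {
	t int64
	v float64
}

type seriesBuffer struct {
	buf []sample
}

// Before: a fresh slice grows from zero capacity on every call, so
// each invocation re-allocates as append outgrows the backing array.
func (s *seriesBuffer) fill(src []sample) []sample {
	out := []sample{}
	for _, smp := range src {
		out = append(out, smp)
	}
	return out
}

// After: truncate and reuse the receiver's buffer, keeping its
// capacity between calls so steady-state iterations allocate nothing.
func (s *seriesBuffer) fillReuse(src []sample) []sample {
	s.buf = s.buf[:0]
	s.buf = append(s.buf, src...)
	return s.buf
}

func main() {
	b := &seriesBuffer{}
	_ = b.fill([]sample{{t: 1, v: 2}})
	_ = b.fillReuse([]sample{{t: 1, v: 2}})
}

The pattern is generic: keep capacity alive across iterations so the allocator and garbage collector drop out of the hot path. Whether that's the right fix for your code is exactly what the profiling data is there to confirm.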

Getting Started

Setting up Remote MCP with Polar Signals Cloud is straightforward:

First, go to your project's settings and generate a new token.

Second, configure Claude Code or Cursor by adding the Polar Signals MCP server to your configuration:

For Claude Code, use the following command:

claude mcp add --transport http polarsignals https://api.polarsignals.com/api/mcp/ --header "Authorization: Bearer eyJhbGciOiJFZERTQSJ9..."

For Cursor, open the MCP settings and add the config below:

{
  "mcpServers": {
    "polarsignals": {
      "url": "https://api.polarsignals.com/api/mcp/",
      "headers": {
        "Authorization": "Bearer eyJhbGciOiJFZERTQSJ9..."
      }
    }
  }
}

Note: it is important to keep the trailing slash at the end of the URL.

Third, start asking questions - Claude now has access to your profiling data through four powerful tools:

  • profile_types: Discover available profile types
  • labels: Explore metadata labels in your profiles
  • values: Get specific label values
  • get_profile: Query profiles with PromQL
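
You'll never need to talk to the server yourself; Claude Code or Cursor drives these tools for you. But if you're curious what a call looks like on the wire, here is a minimal Go sketch (assuming the endpoint accepts standard MCP JSON-RPC POSTs, and skipping the initialize handshake a conforming client performs first):

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// A minimal sketch, not an official client: call "get_profile"
	// with the query from the transcript above. Export your project
	// token as POLARSIGNALS_TOKEN before running.
	body := `{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_profile",
    "arguments": {
      "query": "parca_agent:samples:count:cpu:nanoseconds:delta{app_kubernetes_io_name=~\"thanos-.*\"}",
      "time_range": "1h"
    }
  }
}`

	req, err := http.NewRequest(http.MethodPost, "https://api.polarsignals.com/api/mcp/", bytes.NewBufferString(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	// Streamable HTTP servers may answer with plain JSON or an event stream.
	req.Header.Set("Accept", "application/json, text/event-stream")
	req.Header.Set("Authorization", "Bearer "+os.Getenv("POLARSIGNALS_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(out))
}

The point is simply that the four tools above are ordinary MCP tool calls, so any MCP-capable client can drive them.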

The future of performance engineering

By combining the analytical power of AI with real production profiling data, we're making performance engineering accessible to every developer, not just performance experts.

Whether you're debugging a CPU bottleneck or tracking down a memory leak, you now have an intelligent assistant that understands your performance data as well as your code. The future of performance engineering isn't just about collecting more data - it's about making that data work for you through intelligent analysis and actionable insights.

Are you ready to revolutionize how you optimize performance? Sign up for Polar Signals Cloud to start a free 14-day trial and experience the power of AI-assisted profiling today. The MCP integration is available now for all users, and we can't wait to see how you'll use it to build faster, more efficient applications.


Have questions or feedback about MCP support? Join our community Discord or reach out to us on Twitter/X and LinkedIn. We'd love to hear about your experience with AI-assisted performance analysis.
