Export traces from Mastra agents to TraceRoot using the @traceroot-ai/mastra exporter package. This integrates with Mastra’s built-in OpenTelemetry observability system.

Setup

Install the exporter alongside your Mastra dependencies:
```shell
npm install @traceroot-ai/mastra @mastra/core @mastra/observability
```
Configure TraceRoot as an exporter in your Mastra instance:
```typescript
import { Mastra } from '@mastra/core';
import { Observability } from '@mastra/observability';
import { TraceRootExporter } from '@traceroot-ai/mastra';

const exporter = new TraceRootExporter({
  apiKey: process.env.TRACEROOT_API_KEY,
});

const mastra = new Mastra({
  agents: { /* your agents */ },
  observability: new Observability({
    configs: {
      traceroot: {
        serviceName: 'my-mastra-app',
        exporters: [exporter],
      },
    },
  }),
});
```
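Because spans are exported asynchronously, short-lived processes (CLIs, serverless handlers, CI jobs) can exit before buffered spans are sent. A minimal sketch of flushing on shutdown, using the `flush()` method shown in the usage example below; the `flushOnExit` helper is a hypothetical name, not part of the package:

```typescript
// Sketch: flush buffered spans before the process exits so traces aren't lost.
// `flushable` is anything with an async flush() method, e.g. the
// TraceRootExporter instance created above.
function flushOnExit(flushable: { flush(): Promise<void> }): void {
  const shutdown = async () => {
    await flushable.flush();
    process.exit(0);
  };
  process.on('SIGTERM', shutdown);
  process.on('SIGINT', shutdown);
}

// Usage: flushOnExit(exporter);
```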

Usage

Once configured, all agent calls are traced automatically:
```typescript
import { Mastra } from '@mastra/core';
import { Agent } from '@mastra/core/agent';
import { Observability } from '@mastra/observability';
import { anthropic } from '@ai-sdk/anthropic';

const weatherAgent = new Agent({
  id: 'weatherAgent',
  name: 'Weather Agent',
  instructions: 'You are a helpful weather assistant.',
  model: anthropic('claude-haiku-4-5-20251001'),
  tools: { /* your tools */ },
});

// Reuses the `exporter` instance from the setup snippet above
const mastra = new Mastra({
  agents: { weatherAgent },
  observability: new Observability({
    configs: {
      traceroot: {
        serviceName: 'mastra-weather-agent',
        exporters: [exporter],
      },
    },
  }),
});

const agent = mastra.getAgent('weatherAgent');

// Pass a consistent threadId to group calls under the same session
const result = await agent.generate("What's the weather in Tokyo?", {
  threadId: 'session-123',
  resourceId: 'user-456',
});

console.log(result.text);

// Flush traces before process exit
await exporter.flush();
```
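Tool invocations show up as child spans with their input and output recorded. A minimal sketch of a tool the agent above could use; the tool id, schema, and weather values are illustrative, and the exact `createTool` signature may vary by Mastra version:

```typescript
import { createTool } from '@mastra/core/tools';
import { z } from 'zod';

// Hypothetical weather lookup tool; each call to it is traced as a span
// carrying the validated input and the returned output.
export const weatherTool = createTool({
  id: 'get-weather',
  description: 'Get the current weather for a location',
  inputSchema: z.object({
    location: z.string().describe('City name, e.g. "Tokyo"'),
  }),
  execute: async ({ context }) => {
    // Replace with a real weather API call
    return { location: context.location, temperatureC: 22 };
  },
});
```

Register it on the agent via `tools: { weatherTool }` and TraceRoot will capture each invocation automatically.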

What Gets Captured

| Attribute | Description |
| --- | --- |
| Agent name | The Mastra agent ID |
| Model | Underlying LLM model name |
| Messages | Input messages and conversation history |
| Response | Generated text output |
| Tool calls | Each tool invocation with input and output |
| Tokens | Input and output token counts |
| Cost | Calculated from token usage and model pricing |
| Latency | Request duration per span |
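The cost attribute is derived from token counts and per-model pricing. A sketch of the general calculation; the function name and the per-million-token prices below are made-up placeholders, not TraceRoot's actual pricing table:

```typescript
// Illustrative only: deriving a per-call cost from token usage.
interface Usage {
  inputTokens: number;
  outputTokens: number;
}

// Hypothetical pricing in USD per 1M tokens
const PRICING_PER_MTOK: Record<string, { input: number; output: number }> = {
  'example-model': { input: 1.0, output: 5.0 },
};

function estimateCostUsd(model: string, usage: Usage): number {
  const p = PRICING_PER_MTOK[model];
  if (!p) return 0; // unknown models contribute no cost
  return (
    (usage.inputTokens / 1_000_000) * p.input +
    (usage.outputTokens / 1_000_000) * p.output
  );
}

// 500k input + 100k output tokens on the example model
console.log(estimateCostUsd('example-model', { inputTokens: 500_000, outputTokens: 100_000 }));
// → 1 (USD)
```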