
Automatically capture module executions, signature predictions, and underlying LLM calls in DSPy programs.

Setup

import traceroot
from traceroot import Integration

traceroot.initialize(integrations=[Integration.DSPY])

Usage

Once initialized, every DSPy module call and underlying LLM request is traced automatically:
import dspy
import traceroot
from traceroot import Integration

traceroot.initialize(integrations=[Integration.DSPY])

# DSPy resolves the API key from OPENAI_API_KEY in the environment.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini", max_tokens=1024))


class CoTQA(dspy.Module):
    """Chain-of-thought question-answering module."""

    def __init__(self):
        super().__init__()
        self.cot = dspy.ChainOfThought("question -> answer")

    def forward(self, question: str):
        return self.cot(question=question)


qa = CoTQA()

# The forward call, the chain-of-thought step, and the LLM call are all captured
result = qa(question="Why does ice float on water?")
print(result.reasoning)
print(result.answer)

traceroot.flush()

What Gets Captured

| Attribute | Description |
| --- | --- |
| Module calls | Each Module.__call__ / Module.forward invocation |
| Predictors | Predict, ChainOfThought, ReAct, etc. as nested spans |
| Signatures | Input/output fields declared on each signature |
| LLM calls | Raw completion requests to the configured dspy.LM |
| Tokens & Cost | Aggregated token usage and pricing |
| Latency | Duration per module call and per LLM call |

Run the example

Clone the repo and run a complete agent end-to-end.
