Traces

DriftWise records traces of every LLM call made during plan analysis, drift narratives, and noise fix generation. Traces help you understand how the AI reached its conclusions and debug unexpected results.

Viewing Traces

List traces for a scan

curl "https://app.driftwise.ai/api/v2/orgs/$ORG_ID/scans/<scan_id>/traces" \
-H "x-api-key: $DRIFTWISE_API_KEY"

Returns a summary list:

[
  {
    "id": "trc-a1b2c3d4e5f6...",
    "kind": "narrative",
    "model": "claude-sonnet-4-6",
    "input_tokens": 4200,
    "output_tokens": 850,
    "latency_ms": 2340,
    "cached": true,
    "fallback": false,
    "created_at": "2026-04-10T15:30:00Z"
  }
]
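
A summary list like the one above is easy to post-process client-side. A minimal sketch, using the field names from the sample response to flag traces that fell back to deterministic generation and to total token usage:

```python
import json

# Sample response in the shape shown above
sample = """[
  {"id": "trc-a1b2c3d4e5f6...", "kind": "narrative", "model": "claude-sonnet-4-6",
   "input_tokens": 4200, "output_tokens": 850, "latency_ms": 2340,
   "cached": true, "fallback": false, "created_at": "2026-04-10T15:30:00Z"}
]"""

traces = json.loads(sample)

# Traces where the LLM was unavailable and a deterministic fallback ran instead
fallbacks = [t["id"] for t in traces if t["fallback"]]

# Total tokens consumed across the scan
total_tokens = sum(t["input_tokens"] + t["output_tokens"] for t in traces)

print(fallbacks)     # []
print(total_tokens)  # 5050
```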

Get a specific trace

curl "https://app.driftwise.ai/api/v2/orgs/$ORG_ID/traces/trc-a1b2c3d4e5f6..." \
-H "x-api-key: $DRIFTWISE_API_KEY"
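
The same lookup can be wrapped in a small client helper. A sketch using only the stdlib; the `trace_url` and `get_trace` helpers are illustrative, not part of any DriftWise SDK:

```python
import urllib.request

BASE = "https://app.driftwise.ai/api/v2"

def trace_url(org_id: str, trace_id: str) -> str:
    # Mirrors the curl example above: /orgs/{org_id}/traces/{trace_id}
    return f"{BASE}/orgs/{org_id}/traces/{trace_id}"

def get_trace(org_id: str, trace_id: str, api_key: str) -> bytes:
    req = urllib.request.Request(
        trace_url(org_id, trace_id),
        headers={"x-api-key": api_key},  # same auth header as the curl calls
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```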

Trace Fields

| Field | Description |
| --- | --- |
| id | Trace ID (trc- prefix + 24 hex chars) |
| kind | Operation type — see Trace Kinds below |
| model | LLM model used |
| input_tokens | Token count for the input prompt |
| output_tokens | Token count for the response |
| latency_ms | End-to-end LLM call time in milliseconds |
| cached | Whether the prompt cache was hit |
| fallback | Whether fallback (non-LLM) generation was used |
| error | Error message if the LLM call failed |
| inputs | The data sent to the LLM (plan JSON, resource state, etc.) |
| parsed | Structured output extracted from the LLM response |
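
The documented ID format (trc- prefix plus 24 hex characters) can be validated client-side before a lookup. A sketch; the helper name is ours, and lowercase hex is an assumption based on the sample IDs:

```python
import re

# "trc-" prefix followed by exactly 24 hex characters, per the id field
# description above (lowercase assumed from the sample responses)
TRACE_ID_RE = re.compile(r"^trc-[0-9a-f]{24}$")

def is_trace_id(value: str) -> bool:
    return TRACE_ID_RE.fullmatch(value) is not None

print(is_trace_id("trc-a1b2c3d4e5f6a1b2c3d4e5f6"))  # True
print(is_trace_id("trc-short"))                      # False
```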

Admin vs. Regular View

Traces contain sensitive data (full prompts and raw LLM responses). Access is tiered:

| Field | Regular user | Platform admin |
| --- | --- | --- |
| ID, kind, model, tokens, latency | Yes | Yes |
| Inputs and parsed output | Yes | Yes |
| System prompt | - | Yes |
| User prompt | - | Yes |
| Raw LLM response | - | Yes |

Regular users see enough to understand what data was analyzed and what the LLM concluded. Admins see the full prompts for debugging prompt engineering and injection issues.

Trace Kinds

| Kind | When generated |
| --- | --- |
| narrative | Plain-English narrative for plan analysis |
| drift-narrative | Narrative for cloud drift vs. Terraform state |
| classify | Risk classification for a single change |
| plan-noise-fix | AI-generated fix for a noisy plan pattern |
| iac-gen | Terraform code generated from live resources |
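
When auditing a scan, it can help to aggregate token usage per kind. A minimal sketch over the summary-list shape shown earlier (the helper name is ours):

```python
from collections import defaultdict

def tokens_by_kind(traces):
    # Sum input + output tokens for each trace kind
    totals = defaultdict(int)
    for t in traces:
        totals[t["kind"]] += t["input_tokens"] + t["output_tokens"]
    return dict(totals)

traces = [
    {"kind": "narrative", "input_tokens": 4200, "output_tokens": 850},
    {"kind": "classify", "input_tokens": 900, "output_tokens": 40},
    {"kind": "classify", "input_tokens": 1100, "output_tokens": 60},
]
print(tokens_by_kind(traces))  # {'narrative': 5050, 'classify': 2100}
```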

Retention

Traces are automatically purged 30 days after creation. Download traces you need to retain beyond that window.
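
Given the 30-day window, the purge time for a trace can be derived from its created_at timestamp. A stdlib sketch, assuming created_at is ISO 8601 with a trailing "Z" as in the sample trace:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)  # per the retention policy above

def purge_time(created_at: str) -> datetime:
    # created_at uses a trailing "Z" for UTC, which fromisoformat
    # (pre-3.11) needs rewritten as an explicit offset
    created = datetime.fromisoformat(created_at.replace("Z", "+00:00"))
    return created + RETENTION

print(purge_time("2026-04-10T15:30:00Z"))  # 2026-05-10 15:30:00+00:00
```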

Use Cases

  • Debugging unexpected risk scores — check what data the LLM received and how it reasoned
  • Monitoring LLM costs — track token usage across scans
  • Prompt cache efficiency — the cached flag shows when prompt caching saved tokens
  • Fallback detection — when fallback: true, the LLM was unavailable and a deterministic fallback was used instead
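
The cost-monitoring use case above can be sketched from the token counts in each trace. The per-token rates here are placeholders, not DriftWise or model pricing; substitute your provider's actual rates:

```python
# Hypothetical per-million-token rates -- replace with your model's pricing
INPUT_RATE = 3.00    # USD per 1M input tokens (placeholder)
OUTPUT_RATE = 15.00  # USD per 1M output tokens (placeholder)

def estimated_cost(traces):
    cost = 0.0
    for t in traces:
        cost += t["input_tokens"] / 1_000_000 * INPUT_RATE
        cost += t["output_tokens"] / 1_000_000 * OUTPUT_RATE
    return round(cost, 6)

print(estimated_cost([{"input_tokens": 4200, "output_tokens": 850}]))  # 0.02535
```

Note that traces with cached: true may be billed at a reduced input rate by the provider, so this is an upper-bound estimate.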