Traces
DriftWise records a trace for every LLM call made during plan analysis, drift narratives, and noise-fix generation. Traces show how the AI reached its conclusions and help you debug unexpected results.
Viewing Traces
List traces for a scan
```shell
curl "https://app.driftwise.ai/api/v2/orgs/$ORG_ID/scans/<scan_id>/traces" \
  -H "x-api-key: $DRIFTWISE_API_KEY"
```
Returns a summary list:
```json
[
  {
    "id": "trc-a1b2c3d4e5f6...",
    "kind": "narrative",
    "model": "claude-sonnet-4-6",
    "input_tokens": 4200,
    "output_tokens": 850,
    "latency_ms": 2340,
    "cached": true,
    "fallback": false,
    "created_at": "2026-04-10T15:30:00Z"
  }
]
```
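As a client-side sketch, the summary list above can be aggregated to track token usage and cache efficiency per scan. The field names follow the example response; the aggregation itself is illustrative, not part of the DriftWise API.

```python
import json

# Illustrative aggregation over a trace summary list like the one above.
response_body = """[
  {"id": "trc-a1b2c3d4e5f6", "kind": "narrative", "model": "claude-sonnet-4-6",
   "input_tokens": 4200, "output_tokens": 850, "latency_ms": 2340,
   "cached": true, "fallback": false, "created_at": "2026-04-10T15:30:00Z"}
]"""

traces = json.loads(response_body)
total_in = sum(t["input_tokens"] for t in traces)    # tokens sent to the model
total_out = sum(t["output_tokens"] for t in traces)  # tokens generated
cache_hits = sum(1 for t in traces if t["cached"])   # prompt-cache hits
print(f"{len(traces)} traces, {total_in} in / {total_out} out, {cache_hits} cached")
```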
Get a specific trace
```shell
curl "https://app.driftwise.ai/api/v2/orgs/$ORG_ID/traces/trc-a1b2c3d4e5f6..." \
  -H "x-api-key: $DRIFTWISE_API_KEY"
```
Trace Fields
| Field | Description |
|---|---|
| id | Trace ID (trc- prefix + 24 hex chars) |
| kind | Operation type — see Trace Kinds below |
| model | LLM model used |
| input_tokens | Token count for the input prompt |
| output_tokens | Token count for the response |
| latency_ms | End-to-end LLM call time in milliseconds |
| cached | Whether the prompt cache was hit |
| fallback | Whether fallback (non-LLM) generation was used |
| error | Error message if the LLM call failed |
| inputs | The data sent to the LLM (plan JSON, resource state, etc.) |
| parsed | Structured output extracted from the LLM response |
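The token fields are what you would feed into a cost estimate. A minimal sketch follows; the per-million-token rates here are placeholders for illustration, not DriftWise or model-provider pricing.

```python
# Placeholder (input, output) USD rates per 1M tokens -- NOT real pricing.
RATES = {"claude-sonnet-4-6": (3.00, 15.00)}

def estimate_cost(trace: dict) -> float:
    """Rough spend estimate from a trace's token counts."""
    in_rate, out_rate = RATES.get(trace["model"], (0.0, 0.0))
    return (trace["input_tokens"] * in_rate
            + trace["output_tokens"] * out_rate) / 1_000_000

cost = estimate_cost({"model": "claude-sonnet-4-6",
                      "input_tokens": 4200, "output_tokens": 850})
```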
Admin vs. Regular View
Traces contain sensitive data (full prompts and raw LLM responses). Access is tiered:
| Field | Regular user | Platform admin |
|---|---|---|
| ID, kind, model, tokens, latency | Yes | Yes |
| Inputs and parsed output | Yes | Yes |
| System prompt | - | Yes |
| User prompt | - | Yes |
| Raw LLM response | - | Yes |
Regular users see enough to understand what data was analyzed and what the LLM concluded. Admins also see the full prompts, for debugging prompt engineering and diagnosing prompt-injection issues.
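The tiered view amounts to stripping the admin-only fields before a trace reaches a regular user. A minimal sketch of that filtering, with assumed server-side field names (the actual names are not documented here):

```python
# Assumed field names for the admin-only portions of a trace.
ADMIN_ONLY = {"system_prompt", "user_prompt", "raw_response"}

def redact_for_regular_user(trace: dict) -> dict:
    """Return a copy of the trace with admin-only fields removed."""
    return {k: v for k, v in trace.items() if k not in ADMIN_ONLY}

full = {"id": "trc-a1b2c3d4e5f6", "kind": "narrative",
        "system_prompt": "...", "raw_response": "..."}
visible = redact_for_regular_user(full)
```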
Trace Kinds
| Kind | When generated |
|---|---|
narrative | Plain-English narrative for plan analysis |
drift-narrative | Narrative for cloud drift vs. Terraform state |
classify | Risk classification for a single change |
plan-noise-fix | AI-generated fix for a noisy plan pattern |
iac-gen | Terraform code generated from live resources |
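Since each trace carries a kind, you can break token usage down by operation to see which kind drives spend. An illustrative client-side grouping, assuming the summary-list fields shown earlier:

```python
from collections import defaultdict

def tokens_by_kind(traces):
    """Total input + output tokens per trace kind."""
    totals = defaultdict(int)
    for t in traces:
        totals[t["kind"]] += t["input_tokens"] + t["output_tokens"]
    return dict(totals)

usage = tokens_by_kind([
    {"kind": "narrative", "input_tokens": 4200, "output_tokens": 850},
    {"kind": "classify", "input_tokens": 900, "output_tokens": 120},
    {"kind": "classify", "input_tokens": 700, "output_tokens": 90},
])
```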
Retention
Traces are automatically purged 30 days after creation. Download traces you need to retain beyond that window.
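To schedule downloads before the window closes, the purge deadline can be derived from a trace's created_at timestamp. A sketch assuming the 30-day retention stated above:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)  # per the stated retention policy

def purge_deadline(created_at: str) -> datetime:
    """When a trace will be purged, given its ISO 8601 created_at."""
    created = datetime.fromisoformat(created_at.replace("Z", "+00:00"))
    return created + RETENTION

deadline = purge_deadline("2026-04-10T15:30:00Z")
```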
Use Cases
- Debugging unexpected risk scores — check what data the LLM received and how it reasoned
- Monitoring LLM costs — track token usage across scans
- Prompt cache efficiency — the cached flag shows when prompt caching saved tokens
- Fallback detection — when fallback: true, the LLM was unavailable and a deterministic fallback was used instead
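The last two use cases can be sketched as a single check over a scan's traces: flag any trace that fell back to deterministic generation or errored outright, using the fallback and error fields documented above.

```python
def degraded(traces):
    """IDs of traces that used the non-LLM fallback or hit an error."""
    return [t["id"] for t in traces if t.get("fallback") or t.get("error")]

flagged = degraded([
    {"id": "trc-ok", "fallback": False},
    {"id": "trc-fb", "fallback": True},
    {"id": "trc-err", "fallback": False, "error": "upstream timeout"},
])
```

A non-empty result here is a reasonable trigger for re-running the scan once the LLM is available again.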