# State Sources
State sources tell DriftWise where your Terraform state lives. During drift detection, DriftWise fetches state from these sources and compares it against live cloud infrastructure.
## Supported Kinds

| Kind | Description |
|---|---|
| `s3` | AWS S3 bucket |
| `gcs` | Google Cloud Storage bucket |
| `azure_blob` | Azure Blob Storage container |
| `tfc` | Terraform Cloud / Terraform Enterprise |
| `upload` | Manual state file upload |
## Adding a State Source

### Via the UI
- Go to State Sources in the sidebar
- Click Add State Source
- Select the kind and fill in the configuration
- Optionally link it to a cloud account
### Via the API

```shell
curl -X POST "https://app.driftwise.ai/api/v2/orgs/$ORG_ID/state-sources" \
  -H "x-api-key: $DRIFTWISE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "display_name": "Production State",
    "kind": "s3",
    "config": { ... },
    "cloud_account_id": "optional-account-uuid"
  }'
```
The `cloud_account_id` field is optional. When set, DriftWise links drift results to the specific cloud account. When omitted, the state source is org-wide.
## Configuration by Kind

### S3

```json
{
  "display_name": "AWS Production State",
  "kind": "s3",
  "config": {
    "bucket": "my-terraform-state",
    "prefix": "env/prod/",
    "region": "us-east-1"
  },
  "cloud_account_id": "<aws-account-uuid>"
}
```
The state source uses the linked cloud account's AWS credentials to read the bucket. If your state bucket lives in a separate AWS account, create a cloud account for it with `s3:GetObject` and `s3:ListBucket` permissions.
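As a sketch, a minimal IAM policy granting just those two permissions might look like the following (bucket name and prefix match the config above; adjust to your own layout):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListStateBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-terraform-state"
    },
    {
      "Sid": "ReadStateObjects",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-terraform-state/env/prod/*"
    }
  ]
}
```

Note that `s3:ListBucket` applies to the bucket ARN itself, while `s3:GetObject` applies to object ARNs under the prefix.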
### GCS

```json
{
  "display_name": "GCP Production State",
  "kind": "gcs",
  "config": {
    "bucket": "my-terraform-state",
    "prefix": "env/prod/"
  },
  "cloud_account_id": "<gcp-account-uuid>"
}
```
The linked cloud account's GCP credentials must have `storage.objects.get` and `storage.objects.list` on the bucket.
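One way to grant both permissions is the predefined `roles/storage.objectViewer` role, which includes `storage.objects.get` and `storage.objects.list`. A sketch with a recent `gcloud`, using a placeholder service account email:

```shell
# Grant read-only object access on the state bucket to the linked
# service account (placeholder email; adjust to your own project).
gcloud storage buckets add-iam-policy-binding gs://my-terraform-state \
  --member="serviceAccount:driftwise-reader@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
```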
### Azure Blob

```json
{
  "display_name": "Azure Production State",
  "kind": "azure_blob",
  "config": {
    "storage_account": "myterraformstate",
    "container": "tfstate",
    "prefix": "env/prod/"
  },
  "cloud_account_id": "<azure-account-uuid>"
}
```
The linked cloud account's service principal must have Storage Blob Data Reader on the storage account:
```shell
az role assignment create \
  --assignee "<SP_APP_ID>" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/$AZURE_SUB_ID/resourceGroups/<RG>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT>"
```
### Terraform Cloud

```json
{
  "display_name": "TFC Production",
  "kind": "tfc",
  "config": {
    "token": "your-tfc-api-token",
    "organization": "my-org",
    "workspace": "prod-infra"
  }
}
```
Generate a Terraform Cloud API token with read access to the workspace. No cloud account link is needed; TFC authenticates with its own token.
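To sanity-check the token before adding the source, you can call the Terraform Cloud workspace API directly (org and workspace names match the config above):

```shell
# Should return the workspace JSON if the token can read it;
# a 401/404 means the token or workspace name is wrong.
curl -s \
  -H "Authorization: Bearer your-tfc-api-token" \
  -H "Content-Type: application/vnd.api+json" \
  "https://app.terraform.io/api/v2/organizations/my-org/workspaces/prod-infra"
```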
### Upload

For one-off or local state files, upload directly:

```json
{
  "display_name": "Manual Upload",
  "kind": "upload",
  "config": {}
}
```
Upload state files through the UI or via the analyze endpoint's `plan_json` field.
## How State Sources Are Used

State sources are not synced on a schedule; they are fetched on demand when drift is computed:
- A cloud scan completes (live resources discovered)
- You trigger drift computation (manually or via scheduled scan)
- DriftWise fetches the latest state from all linked state sources
- State is compared against live resources to produce drift items
The `last_fetched_at` field on each state source shows when it was last read.
## Listing State Sources

```shell
curl "https://app.driftwise.ai/api/v2/orgs/$ORG_ID/state-sources" \
  -H "x-api-key: $DRIFTWISE_API_KEY"
```
Returns paginated results. Use `?limit=N&offset=N` for pagination (default limit: 100, max: 500).
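For example, to fetch the second page of 100 (only `limit` and `offset` are assumed here, both documented above):

```shell
# Skip the first 100 state sources and return the next 100.
curl "https://app.driftwise.ai/api/v2/orgs/$ORG_ID/state-sources?limit=100&offset=100" \
  -H "x-api-key: $DRIFTWISE_API_KEY"
```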