This document describes how runfabric state storage works, how to wire backend credentials, and the minimum access each backend requires. It is aligned with the upstream STATE_BACKENDS document. In this repo the canonical configuration is the backend block in runfabric.yml.
Quick credentials matrix (providers + state backends): CREDENTIALS.md.
Current engine-accepted backend kinds:
- local (default)
- postgres — receipts stored in Postgres; set backend.postgresConnectionStringEnv (the env var name holding the DSN) and optional backend.postgresTable (default runfabric_receipts)
- sqlite — receipts in a SQLite file; set backend.sqlitePath (default .runfabric/state.db; resolved relative to the project root)
- dynamodb — receipts in DynamoDB; set backend.receiptTable or backend.lockTable, and use the provider region for AWS
- s3
- gcs
- azblob

runfabric init prompts for state backend selection and defaults to local.
Implemented (1.5 / 1.6): Deploy state (receipts) can use Postgres, SQLite, or DynamoDB in addition to local and S3. Set backend.kind to postgres, sqlite, or dynamodb and configure the connection (see below). Receipts are stored and fetched via the same backend; dashboard, metrics, traces, and list use it.
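For instance, a minimal sketch of a runfabric.yml that moves receipts to Postgres (field names and defaults as documented in this file; the DSN itself lives in the environment, not the config):

```yaml
backend:
  kind: postgres
  # Name of the env var that holds the DSN (this is the documented default)
  postgresConnectionStringEnv: RUNFABRIC_STATE_POSTGRES_URL
  # Optional; runfabric_receipts is the default table name
  postgresTable: runfabric_receipts
```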
```yaml
backend:
  kind: local | postgres | sqlite | s3 | dynamodb | gcs | azblob
  s3Bucket: my-state-bucket
  s3Prefix: runfabric/state
  lockTable: runfabric-locks
  gcsBucket: my-state-bucket
  gcsPrefix: runfabric/state
  azblobContainer: runfabric-state
  azblobPrefix: runfabric/state
  postgresConnectionStringEnv: RUNFABRIC_STATE_POSTGRES_URL
  postgresTable: runfabric_receipts
  sqlitePath: .runfabric/state.db
  receiptTable: runfabric-receipts
```
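Values in the backend block above can also be bound to environment variables; a minimal sketch (the bucket name and env var names here are hypothetical):

```yaml
backend:
  kind: s3
  # Resolved from the STATE_BUCKET env var; no default value given
  s3Bucket: ${env:STATE_BUCKET}
  # Falls back to runfabric/state when STATE_PREFIX is unset
  s3Prefix: ${env:STATE_PREFIX,runfabric/state}
  lockTable: runfabric-locks
```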
You can use dynamic env bindings in these values:
- ${env:VAR_NAME}
- ${env:VAR_NAME,default-value}

Postgres:
- backend.postgresConnectionStringEnv (default RUNFABRIC_STATE_POSTGRES_URL) names the env var holding the DSN.
- backend.postgresTable (default runfabric_receipts). The table is created automatically with columns workspace_id, stage, data (JSONB), updated_at.

```shell
export RUNFABRIC_STATE_POSTGRES_URL="postgres://user:pass@host:5432/dbname?sslmode=require"
```

SQLite:
- backend.sqlitePath (default .runfabric/state.db); the path is relative to the project root. Table runfabric_receipts is created automatically.

DynamoDB:
- Set backend.receiptTable (or backend.lockTable) and ensure the provider has a region. The table must have partition key pk (String) and sort key sk (String). Items: pk = workspace ID (root path), sk = STAGE#<stage>, data = receipt JSON string, updatedAt = timestamp.
- Credentials: standard AWS environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, optional AWS_SESSION_TOKEN) and a region.

```shell
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"
```
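The DynamoDB key schema above (partition key pk, sort key sk, both String) can be provisioned ahead of time; a CloudFormation-style sketch, where the table name and billing mode are illustrative choices:

```yaml
Resources:
  RunfabricReceipts:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: runfabric-receipts
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: pk   # workspace ID (root path)
          AttributeType: S
        - AttributeName: sk   # STAGE#<stage>
          AttributeType: S
      KeySchema:
        - AttributeName: pk
          KeyType: HASH
        - AttributeName: sk
          KeyType: RANGE
```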
GCS credentials: a service account (GOOGLE_APPLICATION_CREDENTIALS) or workload identity.

Azure Blob credentials: AZURE_STORAGE_CONNECTION_STRING, or AZURE_STORAGE_ACCOUNT + AZURE_STORAGE_KEY.

Minimum access:
- postgres: CREATE TABLE, CREATE INDEX (if bootstrap enabled); SELECT, INSERT, UPDATE, DELETE.
- s3: s3:GetObject, s3:PutObject, s3:DeleteObject, s3:ListBucket (scoped to prefix).
- gcs: storage.objects.get, storage.objects.create, storage.objects.delete, storage.objects.list.

Encryption at rest:
- local: rely on host disk encryption policy.
- postgres: enable database encryption-at-rest on the managed service or volume encryption.
- s3: enable SSE-S3 or SSE-KMS on the bucket/prefix.
- gcs: default Google-managed encryption or CMEK.
- azblob: storage encryption enabled (Microsoft-managed or CMK).

In transit, prefer encrypted endpoints (https, sslmode=require, private endpoints where possible). The engine avoids writing sensitive details to state: values whose keys match secret, token, password, credential, apiKey, etc. are persisted as [REDACTED].

State CLI commands:

```shell
runfabric state list -c runfabric.yml --json
runfabric state pull -c runfabric.yml --provider aws-lambda --json
runfabric state backup -c runfabric.yml --out ./.runfabric/backup/state.json --json
runfabric state restore -c runfabric.yml --file ./.runfabric/backup/state.json --json
runfabric state reconcile -c runfabric.yml --json
runfabric state force-unlock -c runfabric.yml --service my-svc --stage dev --provider aws-lambda --json
runfabric state migrate -c runfabric.yml --from local --to postgres --json
```
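The minimum S3 access listed earlier can be expressed as an IAM policy; a sketch in CloudFormation YAML form, with the bucket and prefix taken from the example config (adjust to your own):

```yaml
PolicyDocument:
  Version: "2012-10-17"
  Statement:
    # Object-level access, scoped to the state prefix
    - Effect: Allow
      Action:
        - s3:GetObject
        - s3:PutObject
        - s3:DeleteObject
      Resource: arn:aws:s3:::my-state-bucket/runfabric/state/*
    # Listing, constrained to the same prefix
    - Effect: Allow
      Action: s3:ListBucket
      Resource: arn:aws:s3:::my-state-bucket
      Condition:
        StringLike:
          s3:prefix: runfabric/state/*
```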
State layout per backend:
- local uses .runfabric/state/<service>/<stage>/<provider>.state.json.
- postgres uses a real table backend (backend.postgresTable) keyed by workspace root + stage.
- s3, gcs, and azblob use real object storage backends keyed by <prefix>/<service>/<stage>/<provider>.state.json.

Runbook-style steps when state or locking gets out of sync:
- Run runfabric plan to see current vs desired state.
- Re-run runfabric deploy; the engine is designed to converge. If the provider left resources behind, remove them manually in the cloud console or run runfabric remove and then deploy again.
- Run runfabric recover --dry-run, then runfabric recover to reconcile or roll back.
- If a lock is stuck, run runfabric state force-unlock with the same --service, --stage, and --provider to clear the lock, then retry deploy.

To switch backends:
- Run runfabric state backup with the current backend to export state.
- Edit the backend block in runfabric.yml.
- Run runfabric state restore from the backup file.
- Run runfabric state reconcile to align with the provider if needed.
- Or use runfabric state migrate for a single-command migration path when supported.

runfabric state reconcile compares local state with the provider and can report drift. Use it after manual changes in the cloud or after restoring from backup.

Remote state tests are gated by RUNFABRIC_TEST_REMOTE_STATE=1.
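For the backup-and-switch path, the edited backend block might look like this sketch (moving from local to sqlite, using the documented default path):

```yaml
# After `runfabric state backup`, point the config at the new backend,
# then restore from the backup file and reconcile if needed.
backend:
  kind: sqlite
  sqlitePath: .runfabric/state.db  # default; resolved relative to the project root
```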