<!-- BL-ENT-15 · written 2026-05-07 · update when underlying state changes -->

# Threat Model (Public)

This document describes the assets BlackLake protects, the threat actors we consider in scope, the attack surfaces we have designed against, and the mitigations that exist in code today. It is a public-safe version — it omits operator-only internal details but covers the structure buyers and security reviewers ask about.

This document pairs with `docs/SECURITY-POSTURE.md`, which describes the per-capture-path data boundaries. Read both together.


## Assets

| Asset | Description | Why it matters |
| --- | --- | --- |
| Decision tokens | HMAC-signed receipts for every governance call (`bldt_v1:…`, `bldt_v2:…`) | Tampered or forged tokens let an agent claim it was governed when it was not |
| Policy snapshots | Immutable copy of the policy that produced a decision, stored on `policy_evaluations.policy_snapshot` | Retroactively changing policy rules could alter the apparent decision |
| Audit log | `policy_evaluations`, `cost_records`, `external_events` in Postgres | The log is the evidence chain; deletion or injection creates false audit trails |
| Customer API keys | `api_keys.key_hash` in Postgres; the raw key is held only by the customer | A stolen key gives an attacker full API access as that workspace |
| Magic-link approval tokens | HMAC-signed, time-limited, single-use tokens delivered by email | A stolen token allows an attacker to approve or reject a pending action |
| BYO provider credentials | In current deployments: held only in customer environments, not in BlackLake. In future hosted Depth: encrypted workspace secrets | Provider keys allow unbounded LLM spend charged to the customer |
| Webhook secrets | `BLACKLAKE_WEBHOOK_KEK` (the root key-encryption key) | Compromise allows forging decision tokens and webhook signatures for all workspaces |
| MCP upstream credentials | Headers stored on upstream registrations (API keys for Linear, GitHub, etc.) | Upstream key compromise allows unauthenticated tool calls on behalf of the customer |

## Threat actors

| Actor | Description | In scope for this model |
| --- | --- | --- |
| External attacker | No BlackLake account; attacks API endpoints directly | Yes |
| Malicious workspace member | Has a valid API key; attempts to escalate within or across workspaces | Yes |
| Compromised AI agent | An agent that has been prompted or manipulated into attempting unauthorized actions | Yes — this is the primary use case BlackLake governs against |
| Supply-chain attacker | Compromised npm package or dependency | Partial — code review and `npm audit` in CI provide partial mitigation |
| Infrastructure attacker | Access to Cloud Run, Cloud SQL, GCS, or the deployment environment | Partial — Google Cloud IAM and Workload Identity are the primary controls; this model covers application-layer mitigations |

## Attack surfaces

### 1. Govern endpoint (`POST /v1/govern`)

What it does: Accepts an agent name, tool name, and action payload; evaluates them against workspace policies; and returns a decision and a signed token.

Threats:

- **Agent spoofing:** An attacker with a valid API key calls `govern()` using a different agent name to get a more permissive policy decision.
  - **Mitigation:** Policy selectors match on agent name; changing the agent name produces a different policy match. Evaluations record the exact agent name used. Session actor identity (BL-SD-27) will tighten this further when shipped.
- **Action payload injection:** An attacker crafts a malicious action payload to satisfy a cost-condition check (e.g., faking a low token estimate).
  - **Mitigation:** Cost conditions in `apps/api/src/lib/cost-policy.ts` evaluate the payload as supplied; they do not independently verify it. Pre-spend estimates are advisory. Actual spend is tracked post-execution via `action_results` and `cost_records`.
- **Rate-limit abuse:** Flooding `govern()` to exhaust workspace API quota or drive up costs for other tenants.
  - **Mitigation:** `apps/api/src/lib/rate-limit.ts` enforces per-workspace rate limits backed by `rate_limit_buckets` in Postgres.
- **Decision-token replay:** An attacker captures an allow decision token and presents it as proof of governance for a different action.
  - **Mitigation:** Tokens are bound to `evaluation_id | decision` by HMAC, and the `evaluation_id` is unique per govern call. Presenting a replayed token to `POST /v1/decisions/verify` returns valid, but the evaluation record it maps to carries the original agent, tool, and action payload, so an auditor can detect the mismatch.
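The binding described above can be sketched as follows. This is an illustrative reconstruction, not BlackLake's actual code: the function names and the exact `bldt_v1` field layout are assumptions; only the `evaluation_id | decision` HMAC binding comes from this document.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical token layout: bldt_v1:<evaluation_id>:<decision>:<mac>
// The MAC binds the unique evaluation_id to the decision outcome.
function signDecisionToken(kek: Buffer, evaluationId: string, decision: string): string {
  const mac = createHmac("sha256", kek)
    .update(`${evaluationId}|${decision}`)
    .digest("base64url");
  return `bldt_v1:${evaluationId}:${decision}:${mac}`;
}

function verifyDecisionToken(
  kek: Buffer,
  token: string,
): { evaluationId: string; decision: string } | null {
  const parts = token.split(":");
  if (parts.length !== 4 || parts[0] !== "bldt_v1") return null;
  const [, evaluationId, decision, mac] = parts;
  const expected = createHmac("sha256", kek)
    .update(`${evaluationId}|${decision}`)
    .digest("base64url");
  const a = Buffer.from(mac);
  const b = Buffer.from(expected);
  // Constant-time comparison; length check first because timingSafeEqual throws otherwise.
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return { evaluationId, decision };
}
```

Because the MAC covers both fields, flipping `allow` to `deny` (or grafting the MAC onto another evaluation's ID) invalidates the token rather than silently changing its meaning.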

### 2. MCP proxy

What it does: Accepts MCP tool calls from AI clients, calls `govern()`, and (if allowed) forwards the call to the registered upstream MCP server.

Threats:

- **Upstream substitution:** An attacker modifies an upstream registration to point to a malicious MCP server.
  - **Mitigation:** Upstream modifications require API-key authentication and are logged as audit events. The upstream URL and headers are stored per workspace; cross-workspace contamination is not possible without cross-workspace API key access.
- **Tool-call argument injection via prompt injection:** An LLM agent is prompted to craft tool arguments that bypass policy conditions (e.g., presenting an `amount` in the allowed range in the `govern()` call, then using a different value in the actual call).
  - **Mitigation:** BlackLake governs the arguments it receives in the action payload. It cannot verify what the agent actually sends to the upstream tool versus what was in the govern call. `action_results` catch post-execution discrepancies when the outcome is observable. This is a fundamental limit of the governance model; explicit documentation is the mitigation.
- **Stdio upstream access (local mode):** Local stdio upstreams run as child processes on the developer's machine with the same OS privileges as the `blacklake serve` process.
  - **Mitigation:** This is a local-mode design choice — stdio upstreams are explicitly customer-owned processes. Cloud mode uses HTTP upstreams only.

### 3. Webhook delivery

What it does: `apps/api/src/lib/webhooks.ts` delivers signed event payloads to customer-registered endpoints when evaluations, approvals, and action results are created.

Threats:

- **Webhook forgery:** An attacker POSTs a fake evaluation event to a customer's webhook endpoint.
  - **Mitigation:** Webhooks are signed with HMAC-SHA256 using a per-webhook secret derived from `BLACKLAKE_WEBHOOK_KEK`. Customers verify the `X-BlackLake-Signature` header before processing.
- **Replay attack on webhooks:** An attacker captures a legitimate webhook payload and replays it.
  - **Mitigation:** Webhooks include a timestamp in the signed payload. Customers should reject payloads older than a configurable window (recommended: 5 minutes). No replay-window enforcement is built into the delivery side today; this is a customer-side responsibility.
- **Delivery failure leading to missed events:** A single 503 from the customer's endpoint could drop the event.
  - **Mitigation:** `apps/api/src/lib/webhooks.ts` retries with backoff. A dead-letter queue for permanently failed events is not yet implemented (BL-OPS-11).
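A customer-side verifier covering both webhook mitigations (signature check plus replay window) might look like this sketch. The header name `X-BlackLake-Signature` and the 5-minute recommendation come from this document; the exact signed-payload layout (`<timestamp>.<raw body>`) and hex signature encoding are assumptions, so check the webhook docs for the canonical format.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const REPLAY_WINDOW_MS = 5 * 60 * 1000; // recommended 5-minute tolerance

// Verify a delivery: reject stale timestamps first (replay protection is
// customer-side), then check the HMAC over the timestamp-bound payload.
function verifyWebhook(
  secret: string,
  rawBody: string,
  timestampMs: number,
  signatureHex: string,
  nowMs: number = Date.now(),
): boolean {
  if (Math.abs(nowMs - timestampMs) > REPLAY_WINDOW_MS) return false;
  const expected = createHmac("sha256", secret)
    .update(`${timestampMs}.${rawBody}`) // assumed signed format
    .digest("hex");
  const a = Buffer.from(signatureHex, "hex");
  const b = Buffer.from(expected, "hex");
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Verify against the raw request body bytes, not a re-serialized JSON object, since any re-serialization can change the byte sequence the signature covers.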

### 4. Approval magic links

What it does: A `POST /v1/govern` call with an approval_required outcome creates an approval record and emails a magic link to the approver. The link carries a signed token that, when clicked, calls `POST /v1/approvals/:id/decide`.

Threats:

- **Token theft from email:** An attacker intercepts the approval email and approves the action.
  - **Mitigation:** Tokens are 32-byte cryptographically random values (from `apps/api/src/lib/tokens.ts`), HMAC-signed, and time-limited to 15 minutes (configurable). They are single-use: the token is marked consumed on first use. Email security (TLS, SPF/DKIM/DMARC) is a dependency outside BlackLake's control. Inline approve/reject URLs (`buildInlineApproveUrl` in `apps/api/src/lib/approval-url.ts`) carry the same token and are equally sensitive.
- **Approval by the wrong user:** The email is forwarded and an unintended person approves.
  - **Mitigation:** Approver roles on policies (`policies.approver_roles`) restrict who can decide. Two-person integrity (`policies.requires_two_person`) requires two separate approvals. The approval record stores the deciding user ID.
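The token lifecycle above (32-byte random value, HMAC signature, 15-minute TTL, consume-on-first-use) can be sketched as follows. All names and the token layout are illustrative assumptions, not BlackLake's implementation, and the in-memory consumed set stands in for what would be a database column in practice.

```typescript
import { createHmac, randomBytes, timingSafeEqual } from "node:crypto";

const TTL_MS = 15 * 60 * 1000; // 15-minute default, configurable per the docs

const consumed = new Set<string>(); // in production: a consumed-at column, not memory

// Issue a token: fresh 32-byte CSPRNG nonce, HMAC over id|nonce|expiry.
function issueApprovalToken(kek: Buffer, approvalId: string, nowMs: number): string {
  const nonce = randomBytes(32).toString("base64url");
  const expiresAt = nowMs + TTL_MS;
  const mac = createHmac("sha256", kek)
    .update(`${approvalId}|${nonce}|${expiresAt}`)
    .digest("base64url");
  return `${approvalId}.${nonce}.${expiresAt}.${mac}`;
}

// Redeem a token exactly once: signature, expiry, and consumption all checked.
function redeemApprovalToken(kek: Buffer, token: string, nowMs: number): string | null {
  const [approvalId, nonce, expStr, mac] = token.split(".");
  if (!approvalId || !nonce || !expStr || !mac) return null;
  const expected = createHmac("sha256", kek)
    .update(`${approvalId}|${nonce}|${expStr}`)
    .digest("base64url");
  const a = Buffer.from(mac);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  if (nowMs > Number(expStr) || consumed.has(nonce)) return null;
  consumed.add(nonce); // single-use: marked consumed on first redemption
  return approvalId;
}
```

Signing the expiry into the token means an attacker cannot extend a stolen token's lifetime by editing the timestamp.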

### 5. API key compromise

Threats:

- **Key leaked in source code or logs:** A customer commits their `BLACKLAKE_API_KEY` to a public repo.
  - **Mitigation:** Keys are revocable via `DELETE /v1/api-keys/:id`. Rotation creates a new key and invalidates the old one. Keys are hashed in the database (bcrypt hash stored in `api_keys.key_hash`), so a DB leak does not expose live keys. BlackLake never writes raw keys to logs.
- **Lateral movement across workspaces:** A stolen key for workspace A is used to access workspace B.
  - **Mitigation:** All API routes check `organisation_id` from the authenticated key; there is no cross-workspace access path in the API. The auth middleware in `apps/api/src/middleware/auth.ts` enforces this on every request.
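The hash-at-rest pattern above can be sketched as follows. The document says BlackLake uses bcrypt; this sketch substitutes Node's built-in scrypt so it runs without third-party dependencies, and the function names are illustrative.

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// On key creation: store only salt + slow hash; the raw key is shown once
// to the customer and never persisted.
function hashApiKey(rawKey: string): string {
  const salt = randomBytes(16);
  const hash = scryptSync(rawKey, salt, 32);
  return `${salt.toString("hex")}:${hash.toString("hex")}`;
}

// On each request: re-derive the hash from the presented key and compare
// in constant time against the stored value.
function verifyApiKey(rawKey: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(":");
  if (!saltHex || !hashHex) return false;
  const candidate = scryptSync(rawKey, Buffer.from(saltHex, "hex"), 32);
  return timingSafeEqual(candidate, Buffer.from(hashHex, "hex"));
}
```

A database leak then yields only salted slow hashes, which is what makes revocation plus rotation a sufficient response rather than an emergency re-key of every integration.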

## What BlackLake explicitly does not protect against

- Actions that bypass `govern()` entirely. If an agent calls a tool directly without routing through BlackLake, no governance occurs. Cloud audit reconciliation can detect this after the fact for events the customer forwards, but it is not a real-time prevention control.
- Prompt injection that manipulates an agent into changing the action payload after the govern call but before the tool call. BlackLake governs what it receives in the request.
- Compromise of the customer's infrastructure (developer machine, CI runner, Cloud Run service) where the BlackLake SDK or Depth worker runs. A compromised worker process can make arbitrary govern calls.
- Loss of email-delivered approval tokens due to email provider compromise.
- Key-encryption key (`BLACKLAKE_WEBHOOK_KEK`) compromise. If the KEK is exposed, all decision tokens and webhook signatures for that deployment can be forged.

## Annual update cadence

This document is reviewed and updated as part of the annual security review cycle. Last review: 2026-05-07 (initial version — no prior public threat model existed).