# SDK Reference

## `workflow(name, fn)`

Defines a workflow.

```ts
import { workflow } from '@blacklake-systems/depth-sdk';

const wf = workflow('my-workflow', async (ctx, input) => {
  // ...
  return result;
});
```

Parameters:

- `name: string` — unique workflow name. Used to identify the workflow in the database and for run resumption.
- `fn: (ctx: WorkflowContext, input: TInput) => Promise<TOutput>` — the workflow body.

Returns: `WorkflowDefinition<TInput, TOutput>`

The returned object is typically the default export of a workflow file. The CLI's `run` command imports it and calls `.run()`.
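To make the returned shape concrete, here is an illustrative sketch of a `workflow()` factory, reduced to synchronous code. This is not the SDK implementation: the real `fn` and `run()` are async, and `run()` persists state to SQLite for resumption. The `Ctx` stand-in and the hardcoded `runId` are invented for the example.

```ts
interface Ctx {} // stands in for WorkflowContext

// Hypothetical sketch: returns an object matching the documented
// WorkflowDefinition shape (name, fn, run).
function workflow<TIn, TOut>(name: string, fn: (ctx: Ctx, input: TIn) => TOut) {
  return {
    name,
    fn,
    // The real run() is async and resumes existing runs; this just invokes fn.
    run: (input: TIn) => ({
      runId: 'run-1',
      status: 'completed' as const,
      output: fn({}, input),
    }),
  };
}

const wf = workflow('my-workflow', (_ctx, n: number) => n * 2);
const result = wf.run(21); // { runId: 'run-1', status: 'completed', output: 42 }
```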
## `step(ctx, name, fn)`

Executes a step with replay support.

```ts
import { step } from '@blacklake-systems/depth-sdk';

const result = await step(ctx, 'my-step', async () => {
  return await doWork();
});
```

Parameters:

- `ctx: WorkflowContext` — the context passed to your workflow function.
- `name: string` — unique step name within the run. Determines replay identity.
- `fn: () => Promise<T>` — the step body. Called only if the step has not already completed.

Returns: `Promise<T>`

If the step already completed in a previous run, its stored output is returned without calling `fn`, and a `↩ name (replayed)` message is printed.
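The replay rule can be sketched as a cache keyed by step name. This is an illustration of the documented semantics, not the SDK's internals: it is synchronous for brevity (the real `step()` is async) and a `Map` stands in for the database.

```ts
const completed = new Map<string, unknown>(); // stands in for the step table

// Sketch of replay identity: same name, same stored output.
function replayStep<T>(name: string, fn: () => T): T {
  if (completed.has(name)) {
    console.log(`↩ ${name} (replayed)`);
    return completed.get(name) as T; // stored output, body not called
  }
  const output = fn();         // first run: execute the body
  completed.set(name, output); // persist for future runs
  return output;
}

let calls = 0;
const first = replayStep('my-step', () => { calls += 1; return 'done'; });  // executes
const second = replayStep('my-step', () => { calls += 1; return 'done'; }); // replayed
// calls is 1: the body ran only once
```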
## `WorkflowDefinition<TInput, TOutput>`

The object returned by `workflow()`.

```ts
interface WorkflowDefinition<TInput, TOutput> {
  name: string;
  fn: (ctx: WorkflowContext, input: TInput) => Promise<TOutput>;
  run: (input?: TInput) => Promise<RunResult<TOutput>>;
}
```

### `.run(input?)`

Starts or resumes the workflow.

- If an existing `running` or `paused` run exists for this workflow, it is resumed.
- Otherwise a new run is created.

Returns: `Promise<RunResult<TOutput>>`
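The start-or-resume rule above can be sketched with an in-memory stand-in for the runs table (the real SDK consults SQLite). The `Run` shape and id scheme here are invented for illustration.

```ts
type RunStatus = 'running' | 'completed' | 'failed' | 'paused';
interface Run { id: string; workflow: string; status: RunStatus }

// Sketch of the documented rule: resume a running/paused run if one
// exists for this workflow, otherwise create a new one.
function startOrResume(runs: Run[], workflow: string): Run {
  const resumable = runs.find(
    (r) => r.workflow === workflow && (r.status === 'running' || r.status === 'paused'),
  );
  if (resumable) return resumable; // resume the existing run
  const fresh: Run = { id: `run-${runs.length + 1}`, workflow, status: 'running' };
  runs.push(fresh);
  return fresh;
}

const runs: Run[] = [{ id: 'run-1', workflow: 'my-workflow', status: 'paused' }];
const r = startOrResume(runs, 'my-workflow'); // resumes run-1 rather than creating run-2
```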
## `WorkflowContext`

Passed as the first argument to your workflow function. Provides LLM access, tool calls, and signalling.

### `ctx.llm(model, options)`

Calls an LLM and returns the text response.

```ts
const text = await ctx.llm('anthropic:claude-sonnet-4-6', {
  prompt: 'Explain durable execution in one sentence.',
});
```

Parameters:

- `model: string` — provider and model in `provider:model` format. Supported: `anthropic:*`, `openai:*`, `ollama:*`.
- `options: LlmOptions`

`LlmOptions`:

```ts
interface LlmOptions {
  prompt?: string;      // Simple string prompt
  messages?: Array<{ role: 'system' | 'user' | 'assistant'; content: string }>; // Full message history
  temperature?: number; // 0–1; defaults to the provider's default
  maxTokens?: number;   // Max output tokens
}
```

Either `prompt` or `messages` must be provided. If both are given, `messages` takes precedence.

Returns: `Promise<string>` — the model's text response.

Token counts and cost are recorded on the enclosing step's database row.
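The precedence rule between `prompt` and `messages` can be sketched as below. This mirrors the reference, not SDK code, and the mapping of a string prompt to a single user message is an assumption for illustration, not documented behavior.

```ts
interface Message { role: 'system' | 'user' | 'assistant'; content: string }
interface LlmOptions { prompt?: string; messages?: Message[] }

// Sketch: messages wins when both are supplied; neither is an error.
function effectiveMessages(options: LlmOptions): Message[] {
  if (options.messages) return options.messages; // messages takes precedence
  if (options.prompt !== undefined) {
    // Assumption: a bare prompt becomes one user message.
    return [{ role: 'user', content: options.prompt }];
  }
  throw new Error('Either prompt or messages must be provided');
}

const msgs = effectiveMessages({
  prompt: 'ignored when messages is present',
  messages: [{ role: 'user', content: 'Hi' }],
}); // → the messages array, untouched
```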
### `ctx.tool(name, args)`

Executes a tool call. When Surface is running, the call is governed before execution.

```ts
await ctx.tool('filesystem.writeFile', {
  path: './output.md',
  content: 'Hello',
});
```

Parameters:

- `name: string` — tool name in `namespace.action` format.
- `args: Record<string, unknown>` — tool arguments.

Returns: `Promise<unknown>`

If Surface denies the call, an error is thrown. If Surface requires approval, the run pauses until a reviewer approves it in the Surface console.
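Because a denial surfaces as a thrown error, callers can catch it and fall back. The sketch below is hypothetical: `governedTool` stands in for `ctx.tool` (synchronous here for brevity), and the deny rule is invented for illustration.

```ts
// Invented deny list standing in for a Surface policy decision.
const denied = new Set(['filesystem.writeFile']);

function governedTool(name: string, args: Record<string, unknown>): unknown {
  if (denied.has(name)) {
    throw new Error(`Surface denied tool call: ${name}`); // simulated denial
  }
  return { ok: true, name, args };
}

let outcome = 'written';
try {
  governedTool('filesystem.writeFile', { path: './output.md', content: 'Hello' });
} catch {
  outcome = 'denied'; // the denial arrives as a thrown error
}
```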
### `ctx.waitForApproval(reason)`

Pauses the workflow until a human approves.

```ts
await ctx.waitForApproval('Review the draft before publishing');
```

Parameters:

- `reason: string` — shown to the reviewer in the Surface console or CLI.

Returns: `Promise<void>`

The run's status is set to `paused`. The workflow resumes when a signal named `approval` arrives. To approve from the CLI:

```shell
depth signal <run_id> approval
```

Times out after 5 minutes with an error.
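In effect, `waitForApproval` is a wait on the fixed signal name `approval` for the current run. The matching sketch below is illustrative only; the `Signal` shape is an assumption based on the `depth_signals` table mentioned under the CLI commands.

```ts
interface Signal { runId: string; name: string } // assumed shape

// Sketch: a signal resumes the approval wait only when it targets this
// run and is named exactly 'approval'.
function unblocksApproval(signal: Signal, runId: string): boolean {
  return signal.runId === runId && signal.name === 'approval';
}

const resumes = unblocksApproval({ runId: 'run-1', name: 'approval' }, 'run-1');   // true
const ignored = unblocksApproval({ runId: 'run-1', name: 'data-ready' }, 'run-1'); // false
```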
### `ctx.waitForSignal(name)`

Pauses the workflow until an arbitrary named signal arrives.

```ts
const payload = await ctx.waitForSignal('data-ready');
```

Parameters:

- `name: string` — the signal name to wait for.

Returns: `Promise<unknown>` — the signal's payload (parsed JSON if valid, otherwise the raw string).

To send a signal:

```shell
depth signal <run_id> data-ready '{"rows": 42}'
```

Times out after 5 minutes with an error.
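The documented payload rule, parsed JSON if valid, otherwise the raw string, can be sketched directly. This mirrors the reference's description, not the SDK's actual code.

```ts
// Sketch: try JSON first, fall back to the raw string.
function parsePayload(raw: string): unknown {
  try {
    return JSON.parse(raw);
  } catch {
    return raw; // not valid JSON: delivered unchanged
  }
}

const parsed = parsePayload('{"rows": 42}'); // → { rows: 42 }
const plain = parsePayload('data-ready!');   // → 'data-ready!'
```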
### `ctx.waitFor(durationMs)`

Pauses the workflow for a fixed duration.

```ts
await ctx.waitFor(30_000); // 30 seconds
```

Parameters:

- `durationMs: number` — duration in milliseconds.

Returns: `Promise<void>`
## `RunResult<T>`

Returned by `WorkflowDefinition.run()`.

```ts
interface RunResult<T> {
  runId: string;
  status: 'running' | 'completed' | 'failed' | 'paused';
  output?: T;
  error?: string;
  steps: StepRecord[];
  totalCostUsd: number;
}
```
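Given the shapes above, per-step `costUsd` values should account for `totalCostUsd`; steps that never called an LLM carry no cost. The helper and the sample numbers below are invented for illustration.

```ts
interface StepCost { name: string; costUsd?: number } // subset of StepRecord

// Sketch: sum per-step costs, treating missing costUsd as zero.
function sumStepCosts(steps: StepCost[]): number {
  return steps.reduce((total, s) => total + (s.costUsd ?? 0), 0);
}

const total = sumStepCosts([
  { name: 'fetch', costUsd: 0.002 },
  { name: 'summarize', costUsd: 0.013 },
  { name: 'wait' }, // non-LLM step: no cost recorded
]); // ≈ 0.015
```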
## `StepRecord`

A completed step's persisted metadata.

```ts
interface StepRecord {
  id: string;
  name: string;
  status: 'running' | 'completed' | 'failed' | 'skipped';
  output?: unknown;
  error?: string;
  provider?: string; // 'anthropic', 'openai', 'ollama'
  model?: string;    // 'claude-sonnet-4-6', 'gpt-4o', etc.
  inputTokens?: number;
  outputTokens?: number;
  costUsd?: number;
  startedAt: string;
  completedAt?: string;
}
```
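Since `startedAt` and `completedAt` are timestamp strings, a step's wall-clock duration is a simple subtraction. This helper is not part of the SDK; it assumes the strings are ISO-8601, which `Date.parse` accepts.

```ts
// Sketch: duration of a step, or undefined while it is still running.
function stepDurationMs(startedAt: string, completedAt?: string): number | undefined {
  if (!completedAt) return undefined; // no end time yet
  return Date.parse(completedAt) - Date.parse(startedAt);
}

const ms = stepDurationMs('2025-01-01T00:00:00.000Z', '2025-01-01T00:00:02.500Z'); // 2500
const pending = stepDurationMs('2025-01-01T00:00:00.000Z'); // undefined
```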
## CLI Commands

### `depth run <file.ts> [--input '<json>']`

Compiles and runs a workflow file. The file must export a `WorkflowDefinition` as its default export.

### `depth status <run_id>`

Prints status, timing, and step details for a run.

### `depth list`

Prints the 20 most recent runs.

### `depth signal <run_id> <name> [payload_json]`

Inserts a signal into the `depth_signals` table, unblocking any `waitForSignal` or `waitForApproval` waiting on that name.

### `depth resume <run_id>`

Not yet implemented. Re-run the workflow file instead — it automatically resumes paused runs.

### `depth --version`

Prints the CLI version.

### `depth --help`

Prints usage information.
## Environment Variables

| Variable | Default | Description |
|---|---|---|
| `ANTHROPIC_API_KEY` | — | Required for `anthropic:*` models |
| `OPENAI_API_KEY` | — | Required for `openai:*` models |
| `BLACKLAKE_SURFACE_URL` | `http://localhost:3100` | Surface endpoint for governance |
| `BLACKLAKE_DB_PATH` | `~/.blacklake/blacklake.db` | SQLite database path |
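A typical local setup might export these before running the CLI. The variable names come from the table above; the values below are placeholders, not real keys or required paths.

```shell
# Placeholder values — substitute your own keys and paths.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
export BLACKLAKE_SURFACE_URL="http://localhost:3100"        # default shown
export BLACKLAKE_DB_PATH="$HOME/.blacklake/blacklake.db"    # default shown
```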