Overview
Collection jobs are the programmatic way to run metrics on demand for one or more calls via the API. While policies automate collection for incoming calls, collection jobs let you trigger metric processing against existing calls, whether that means backfilling a new metric across historical data or re-running after a definition change.
When to Use Collection Jobs
- Backfill metrics — Run newly created metrics on historical calls
- Re-run metrics — Reprocess calls after updating a metric definition
- On-demand analysis — Collect metrics for a specific set of calls uploaded via API
- Scale up after testing — You’ve validated a metric in the Playground, now run it across many calls programmatically
Creating a Collection Job
Provide an array of call IDs and the metrics you want to collect:

| Field | Type | Required | Description |
|---|---|---|---|
| callIds | string[] | Yes | Array of call UUIDs to process (minimum 1) |
| metrics | array | Yes | Metric definitions to collect (minimum 1) |
The totalItems count equals callIds.length * metrics.length; each call-metric combination is a separate item.
Job Lifecycle
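As an illustration, the item count can be computed from the request body before submitting. This is a sketch: only callIds and metrics come from the table above, and the MetricDefinition shape is an assumption.

```typescript
// Sketch of a create-job payload; the metric shape (name/prompt) is an
// assumption — only callIds and metrics are documented fields.
interface MetricDefinition {
  name: string;
  prompt: string;
}

interface CollectionJobRequest {
  callIds: string[];           // call UUIDs, minimum 1
  metrics: MetricDefinition[]; // minimum 1
}

// Each call-metric combination is processed as a separate item,
// so totalItems = callIds.length * metrics.length.
function totalItems(req: CollectionJobRequest): number {
  return req.callIds.length * req.metrics.length;
}

const request: CollectionJobRequest = {
  callIds: ["call-1", "call-2", "call-3"],
  metrics: [
    { name: "sentiment", prompt: "Rate the caller's sentiment." },
    { name: "resolution", prompt: "Was the issue resolved?" },
  ],
};

console.log(totalItems(request)); // 3 calls x 2 metrics = 6 items
```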
Collection jobs progress through the following statuses:

| Status | Description |
|---|---|
| PENDING | Job created, waiting to start processing |
| PROCESSING | Actively collecting metrics from calls |
| COMPLETED | All items processed successfully |
| FAILED | Job encountered errors (check errorMessage) |
| CANCELED | Job was canceled before completion |
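A common way to consume these statuses is to poll until the job reaches a terminal state (COMPLETED, FAILED, or CANCELED). In this sketch, the getJob callback stands in for whatever SDK or HTTP call retrieves the job; it is not a documented API.

```typescript
type JobStatus = "PENDING" | "PROCESSING" | "COMPLETED" | "FAILED" | "CANCELED";

interface JobSnapshot {
  status: JobStatus;
  errorMessage?: string; // populated when status is FAILED
}

const TERMINAL_STATUSES: ReadonlySet<JobStatus> = new Set([
  "COMPLETED",
  "FAILED",
  "CANCELED",
]);

// Poll until the job leaves PENDING/PROCESSING, sleeping between checks.
async function waitForJob(
  getJob: () => Promise<JobSnapshot>,
  intervalMs = 2000,
): Promise<JobSnapshot> {
  for (;;) {
    const job = await getJob();
    if (TERMINAL_STATUSES.has(job.status)) {
      return job;
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

A short interval keeps latency low for small jobs; for large backfills a longer interval (or exponential backoff) avoids unnecessary requests.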
Progress Tracking
Monitor progress using these fields:

| Field | Description |
|---|---|
| totalItems | Total number of call-metric combinations to process |
| completedItems | Number of items successfully processed |
| failedItems | Number of items that failed |
| startedAt | When processing began |
| completedAt | When processing finished |
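These counters make it straightforward to derive a completion fraction. Treating "finished" as completed plus failed is an interpretation for display purposes, not a documented guarantee:

```typescript
interface JobProgress {
  totalItems: number;
  completedItems: number;
  failedItems: number;
}

// Fraction of items finished, counting both successes and failures.
function fractionDone(job: JobProgress): number {
  if (job.totalItems === 0) return 1;
  return (job.completedItems + job.failedItems) / job.totalItems;
}

console.log(fractionDone({ totalItems: 10, completedItems: 7, failedItems: 1 })); // 0.8
```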
Monitoring Jobs
List Collection Jobs
| Parameter | Type | Description |
|---|---|---|
| limit | number | Max results (1-50, default: 20) |
| after | string | Cursor for pagination |
| status | string | Filter by PENDING, PROCESSING, COMPLETED, FAILED, or CANCELED |
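With a cursor parameter like after, listing every job is a loop that follows the cursor until it is exhausted. The page shape below (a data array plus a next-page cursor) is an assumption about the response format; adapt it to the actual field names:

```typescript
interface Page<T> {
  data: T[];
  nextAfter?: string; // cursor for the next page; absent on the last page
}

// Fetch every page by threading the cursor through repeated list calls.
async function listAll<T>(
  listPage: (after?: string) => Promise<Page<T>>,
): Promise<T[]> {
  const items: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await listPage(cursor);
    items.push(...page.data);
    cursor = page.nextAfter;
  } while (cursor !== undefined);
  return items;
}
```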
Get a Collection Job
Example: Playground to Production
A typical end-to-end workflow: test a metric in the Playground, then run it across calls via the SDK.
Test in Playground
Use the Playground to test your metric prompt against a sample call. Iterate until you’re happy with the output.
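The SDK step that follows could be sketched as below. The client interface (createCollectionJob, getCollectionJob) is hypothetical; substitute the actual SDK methods or HTTP calls:

```typescript
interface Job {
  id: string;
  status: "PENDING" | "PROCESSING" | "COMPLETED" | "FAILED" | "CANCELED";
}

// Hypothetical client surface; the real method names may differ.
interface CollectionJobClient {
  createCollectionJob(body: {
    callIds: string[];
    metrics: { name: string; prompt: string }[];
  }): Promise<Job>;
  getCollectionJob(id: string): Promise<Job>;
}

// Create a job for the validated metric, then poll until it settles.
async function runMetricAcrossCalls(
  client: CollectionJobClient,
  callIds: string[],
  metric: { name: string; prompt: string },
  intervalMs = 2000,
): Promise<Job> {
  let job = await client.createCollectionJob({ callIds, metrics: [metric] });
  while (job.status === "PENDING" || job.status === "PROCESSING") {
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
    job = await client.getCollectionJob(job.id);
  }
  return job;
}
```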
What’s Next
Playground
Test metrics interactively before running at scale
Metric Policies
Automate metric collection for incoming calls

