Batch Processing

Batch endpoints let you submit up to 1,000 files in a single API call. Each file is enqueued as an independent run and processed asynchronously — use webhooks to know when the batch completes, then fetch results via the List endpoints.

Batch endpoints are available for all three document processor types:

| Operation | Create batch | Get batch status |
| --- | --- | --- |
| Extract | `POST /extract_runs/batch` | `GET /batch_runs/{id}` |
| Classify | `POST /classify_runs/batch` | `GET /batch_runs/{id}` |
| Split | `POST /split_runs/batch` | `GET /batch_runs/{id}` |

When to use batch endpoints

Use batch endpoints when you have many files to process — for example, end-of-day ingestion pipelines, bulk backfills, or any high-volume async workload.

Batch submissions are placed into a delayed queue and given a lower default priority than single runs. This ensures that batch workloads do not interfere with interactive use from the dashboard or single-file API calls.

For low-volume or low-latency use cases, the single-file async endpoints with polling or webhooks are the right choice.

How it works

  1. Submit — Send a POST request with your processor ID and an inputs array (1–1,000 items). The endpoint returns immediately with a batch object in PENDING status.
  2. Process — Each item in inputs is enqueued as an independent run. The batch transitions through PENDING → PROCESSING → PROCESSED (or FAILED / CANCELLED).
  3. Consume results — Subscribe to webhooks to be notified when the batch completes, then fetch individual run results via the List endpoints filtered by batchId.

Inline processor configuration (config) is not supported for batch requests. You must reference an existing processor by id.
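Because `inputs` accepts at most 1,000 items per request, larger backfills need to be split across multiple batch submissions. A minimal sketch of a chunking helper (the `chunkInputs` name and the helper itself are our own convention, not part of the SDK; only the 1,000-item limit comes from the docs above):

```typescript
// Split a large list of inputs into slices that fit the 1,000-item
// per-request limit, so each slice can go into one createBatch call.
const MAX_BATCH_INPUTS = 1000;

function chunkInputs<T>(inputs: T[], size: number = MAX_BATCH_INPUTS): T[][] {
  if (size <= 0) throw new Error("chunk size must be positive");
  const chunks: T[][] = [];
  for (let i = 0; i < inputs.length; i += size) {
    chunks.push(inputs.slice(i, i + size));
  }
  return chunks;
}
```

Each chunk would then be submitted as its own batch, yielding its own batch ID to track.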


Batch extraction

```typescript
import { ExtendClient } from "extend-ai";

const client = new ExtendClient({ token: "YOUR_API_KEY" });

const batch = await client.extractRuns.createBatch({
  extractor: {
    id: "ex_xK9mLPqRtN3vS8wF5hB2cQ",
    // version: "1.0", // optional — defaults to "latest"
    // overrideConfig: { ... }, // optional — partial config override
  },
  inputs: [
    {
      file: { url: "https://example.com/invoice1.pdf" },
      metadata: { customerId: "cust_abc123" },
    },
    {
      file: { url: "https://example.com/invoice2.pdf" },
      metadata: { customerId: "cust_def456" },
    },
    {
      file: { id: "file_ghi789" }, // previously uploaded Extend file ID
    },
  ],
});

console.log(batch.id); // "bpr_Xj8mK2pL9nR4vT7qY5wZ"
console.log(batch.status); // "PENDING"

// Poll the batch status
const status = await client.batchRuns.get(batch.id);
console.log(status.status); // "PENDING" | "PROCESSING" | "PROCESSED" | "FAILED" | "CANCELLED"
console.log(status.runCount); // number of individual runs created
```

Batch classification

```typescript
import { ExtendClient } from "extend-ai";

const client = new ExtendClient({ token: "YOUR_API_KEY" });

const batch = await client.classifyRuns.createBatch({
  classifier: {
    id: "cl_xK9mLPqRtN3vS8wF5hB2cQ",
    // version: "2.0", // optional — defaults to "latest"
    // overrideConfig: { ... }, // optional — partial config override
  },
  inputs: [
    {
      file: { url: "https://example.com/document1.pdf" },
      metadata: { source: "email-inbox" },
    },
    {
      file: { url: "https://example.com/document2.pdf" },
      metadata: { source: "upload-portal" },
    },
  ],
});

console.log(batch.id); // "bpr_Xj8mK2pL9nR4vT7qY5wZ"
console.log(batch.status); // "PENDING"
```

Batch splitting

Raw text input ({ text: "..." }) is not supported for split runs. Provide files as a URL ({ url: "..." }) or an Extend file ID ({ id: "..." }).

```typescript
import { ExtendClient } from "extend-ai";

const client = new ExtendClient({ token: "YOUR_API_KEY" });

const batch = await client.splitRuns.createBatch({
  splitter: {
    id: "spl_xK9mLPqRtN3vS8wF5hB2cQ",
    // version: "1.0", // optional — defaults to "latest"
    // overrideConfig: { ... }, // optional — partial config override
  },
  inputs: [
    {
      file: { url: "https://example.com/multi-doc1.pdf" },
      metadata: { batchRef: "daily-run-2026-04-07" },
    },
    {
      file: { url: "https://example.com/multi-doc2.pdf" },
      metadata: { batchRef: "daily-run-2026-04-07" },
    },
  ],
});

console.log(batch.id); // "bpr_Xj8mK2pL9nR4vT7qY5wZ"
console.log(batch.status); // "PENDING"
```

Monitoring results

Webhooks

Subscribe to batch completion events to be notified when a batch finishes. The webhook payload includes the batchId but not individual run results — use the List endpoints (below) to fetch those once notified.

All batch operations (extract, classify, split) emit the same event types. Use the processorType field in the payload to determine which type of batch completed.

| Event | Description |
| --- | --- |
| `batch_processor_run.processed` | Triggered when a batch has finished processing |
| `batch_processor_run.failed` | Triggered when a batch has failed |

See Webhook Configuration for setup instructions.
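Since all three processor types emit the same events, a handler can route on `processorType` to pick the right List endpoint. A sketch of that routing, assuming a payload shape with `eventType`, `batchId`, and `processorType` fields and `EXTRACT`/`CLASSIFY`/`SPLIT` as the type values (the exact field names and enum values here are assumptions, not confirmed by this page):

```typescript
// Assumed webhook payload shape; only eventType, batchId, and
// processorType are described in the docs above.
type BatchEvent = {
  eventType: "batch_processor_run.processed" | "batch_processor_run.failed";
  payload: {
    batchId: string;
    processorType: "EXTRACT" | "CLASSIFY" | "SPLIT";
  };
};

// Map a completed batch to the List endpoint to query for its runs.
function listEndpointFor(event: BatchEvent): string {
  const paths = {
    EXTRACT: "/extract_runs",
    CLASSIFY: "/classify_runs",
    SPLIT: "/split_runs",
  } as const;
  return `${paths[event.payload.processorType]}?batchId=${event.payload.batchId}`;
}
```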

Checking batch status

You can poll the GET batch endpoint at any time to check the current status of a batch. The same endpoint works for all processor types — use the processorType field in the response to determine which type of batch it is:

```
GET /batch_runs/{batchId}
```

The batch status field reflects the aggregate state:

| Status | Meaning |
| --- | --- |
| `PENDING` | Queued, not yet started |
| `PROCESSING` | Runs are actively being processed |
| `PROCESSED` | All runs have completed |
| `FAILED` | The batch encountered a fatal error |
| `CANCELLED` | The batch was cancelled |
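If you poll instead of using webhooks, loop until the batch reaches a terminal status. A minimal sketch; the `waitForBatch` helper, the fixed interval, and the injected `getStatus` callback are our own conventions, not part of the SDK (something like `() => client.batchRuns.get(batch.id).then((b) => b.status)` would be a natural fetcher to pass in):

```typescript
type BatchStatus = "PENDING" | "PROCESSING" | "PROCESSED" | "FAILED" | "CANCELLED";

const TERMINAL: BatchStatus[] = ["PROCESSED", "FAILED", "CANCELLED"];

// Poll until the batch reaches a terminal status, or give up after
// maxAttempts polls. The status fetcher is injected so this works
// with any client.
async function waitForBatch(
  getStatus: () => Promise<BatchStatus>,
  intervalMs = 5000,
  maxAttempts = 120,
): Promise<BatchStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await getStatus();
    if (TERMINAL.includes(status)) return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("timed out waiting for batch to complete");
}
```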

Fetching individual results

Once the batch has completed, retrieve individual run results using the List endpoints with a batchId filter:

```
GET /extract_runs?batchId={batchId}
GET /classify_runs?batchId={batchId}
GET /split_runs?batchId={batchId}
```
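Once fetched, a common next step is separating successful runs from failed ones so failures can be resubmitted individually. A sketch, assuming each listed run carries an `id` and a run-level `status` mirroring the batch statuses (the run shape and the `partitionRuns` helper are assumptions for illustration):

```typescript
// Minimal assumed shape of a listed run; real responses carry more fields.
type Run = { id: string; status: "PROCESSED" | "FAILED" | "CANCELLED" };

// Partition a batch's runs so failures can be retried on their own.
function partitionRuns(runs: Run[]): { succeeded: Run[]; failed: Run[] } {
  const succeeded: Run[] = [];
  const failed: Run[] = [];
  for (const run of runs) {
    (run.status === "PROCESSED" ? succeeded : failed).push(run);
  }
  return { succeeded, failed };
}
```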