This guide walks through generating large volumes of content — 80 to 150+ articles — across multiple dealership sites, each with its own API key and webhook endpoint.

Batch Pipeline

The bulk workflow follows a three-phase pipeline from keywords to published content: prepare batch payloads from completed IdeaClouds, submit them with throttling, and track completion via webhooks.

Prerequisites

  • An API key for each site (with ideaclouds:write and content:write scopes)
  • A webhook configured on each site to receive content.completed and content.failed events
  • Completed IdeaClouds for each article you want to generate
If you still need to create IdeaClouds in bulk, use the Create IdeaClouds (batch) endpoint first. It accepts up to 25 keywords per request.
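If your topic list exceeds the per-request limit, it can be chunked before submission. A minimal sketch, assuming your keywords are plain strings; the `keywords` payload field is illustrative, so check the batch endpoint reference for the exact request shape:

```python
# Split a keyword list into request-sized chunks for the
# Create IdeaClouds (batch) endpoint (25 keywords max per request).
def chunk_keywords(keywords, size=25):
    return [keywords[i:i + size] for i in range(0, len(keywords), size)]

# 60 keywords -> 3 requests of 25, 25, and 10 keywords
payloads = [{"keywords": chunk} for chunk in chunk_keywords([f"kw-{n}" for n in range(60)])]
```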

Step 1: Prepare Batch Payloads

Group your IdeaCloud IDs into batches of up to 10 (the maximum for Create content (batch)):
# batch_01.json
{
  "items": [
    { "ideacloud_id": "ic-uuid-001", "article_type": "basic", "auto_compliance": true, "auto_content_tools": true },
    { "ideacloud_id": "ic-uuid-002", "article_type": "basic", "auto_compliance": true, "auto_content_tools": true },
    { "ideacloud_id": "ic-uuid-003", "article_type": "qa", "auto_compliance": true, "auto_content_tools": true }
  ]
}
For 100 articles, you’ll have 10 batch files. For 150, you’ll have 15.
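Generating those files by hand doesn't scale, so a small script can do the grouping. A sketch that builds the payloads from a list of completed IdeaCloud IDs; it returns file contents as strings rather than writing to disk, so it's easy to inspect or adapt:

```python
import json

def build_batch_files(ideacloud_ids, batch_size=10, article_type="basic"):
    """Group IdeaCloud IDs into batch_NN.json payloads of up to 10 items each."""
    files = {}
    for n, start in enumerate(range(0, len(ideacloud_ids), batch_size), start=1):
        items = [
            {
                "ideacloud_id": ic_id,
                "article_type": article_type,
                "auto_compliance": True,
                "auto_content_tools": True,
            }
            for ic_id in ideacloud_ids[start:start + batch_size]
        ]
        files[f"batch_{n:02d}.json"] = json.dumps({"items": items}, indent=2)
    return files

# 100 IDs -> 10 files: batch_01.json ... batch_10.json
batches = build_batch_files([f"ic-uuid-{i:03d}" for i in range(1, 101)])
```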

Step 2: Submit Batches with Throttling

Send each batch with a pause between requests to stay well under the 60 requests/minute default limit:
API_KEY="hzk_your_key_here"
BASE_URL="https://api.app.hrizn.io/v1/public/content/batch"

for batch_file in batch_*.json; do
  HTTP_CODE=$(curl -s -o response.json -w "%{http_code}" \
    -X POST "$BASE_URL" \
    -H "X-API-Key: $API_KEY" \
    -H "Content-Type: application/json" \
    -d @"$batch_file")

  if [ "$HTTP_CODE" -eq 202 ]; then
    echo "Submitted: $batch_file"
  else
    echo "Error $HTTP_CODE on $batch_file"
    cat response.json
  fi

  sleep 2
done
At 2-second intervals, 15 batches (150 articles) complete in about 30 seconds — using only 15 of your 60 requests/minute budget.
When running many sites concurrently, each site still has its own rate limit. However, all sites share the same generation infrastructure — submitting thousands of articles simultaneously may result in longer generation times.
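The per-site concurrency described above can be sketched with one thread per site, so each site's sequential throttling stays intact while all sites submit in parallel. The `send` callback is a hypothetical hook standing in for the real HTTP call, and the gap is shortened here for illustration:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def submit_site(site_name, batch_files, gap_s, send):
    """Send one site's batches sequentially, pausing between requests.

    `send` is a stand-in for the real curl/HTTP submission call.
    """
    for batch_file in batch_files:
        send(site_name, batch_file)
        time.sleep(gap_s)

# Each site gets its own thread and its own rate-limit budget.
sites = {
    "site-a": ["batch_01.json", "batch_02.json"],
    "site-b": ["batch_01.json", "batch_02.json", "batch_03.json"],
}
sent = []
with ThreadPoolExecutor(max_workers=len(sites)) as pool:
    for name, files in sites.items():
        pool.submit(submit_site, name, files, 0.01, lambda s, f: sent.append((s, f)))
# The context manager waits for all submissions to finish.
```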

Step 3: Handle Rate Limit Errors

If you receive a 429, use the Retry-After header to wait and retry with exponential backoff:
MAX_RETRIES=5

submit_batch() {
  local api_key="$1"
  local batch_file="$2"
  local retries=0
  local backoff=5

  while [ "$retries" -le "$MAX_RETRIES" ]; do
    HTTP_CODE=$(curl -s -o response.json -w "%{http_code}" \
      -D headers.txt \
      -X POST "https://api.app.hrizn.io/v1/public/content/batch" \
      -H "X-API-Key: $api_key" \
      -H "Content-Type: application/json" \
      -d @"$batch_file")

    if [ "$HTTP_CODE" -eq 202 ]; then
      echo "OK: $batch_file"
      return 0
    elif [ "$HTTP_CODE" -eq 429 ]; then
      retries=$((retries + 1))
      retry_after=$(grep -i 'retry-after' headers.txt | tr -d '\r' | awk '{print $2}')
      wait_time=${retry_after:-$backoff}
      echo "Rate limited on $batch_file — waiting ${wait_time}s (retry $retries/$MAX_RETRIES)"
      sleep "$wait_time"
      backoff=$((backoff * 2))
    else
      echo "Failed $HTTP_CODE: $batch_file"
      return 1
    fi
  done

  echo "Gave up after $MAX_RETRIES retries: $batch_file"
  return 1
}
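The wait schedule the script above produces can be expressed compactly. A sketch where each element of `retry_after_values` is the server's Retry-After value for that attempt, or None when the header is absent:

```python
def backoff_schedule(retry_after_values, base=5, max_retries=5):
    """Wait time before each retry: honor Retry-After when the server sends
    it, otherwise fall back to exponential backoff (5s, 10s, 20s, ...).
    The fallback doubles after every attempt, matching the shell script above.
    """
    waits = []
    backoff = base
    for retry_after in retry_after_values[:max_retries]:
        waits.append(retry_after if retry_after is not None else backoff)
        backoff *= 2
    return waits

backoff_schedule([30, None, None])  # -> [30, 10, 20]
```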

Step 4: Track Progress with Webhooks

Each site’s webhook receives events as articles generate. Log the article IDs from the batch response to correlate with incoming webhook payloads:
// Webhook payload when an article finishes
{
  "type": "content.completed",
  "site_id": "550e8400-...",
  "data": {
    "article_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "article_type": "basic"
  }
}
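Correlation can be as simple as keeping a set of pending article IDs and removing each one as its webhook arrives. A sketch, with the caveat that the 202 batch response's field name (`articles`) is an assumption here; adapt it to the actual response body you receive:

```python
# Track article IDs from submission until their webhook arrives.
pending = set()

def record_submission(batch_response):
    """Log the article IDs returned when a batch is accepted (202).

    The "articles" field name is illustrative, not the documented shape.
    """
    for article in batch_response.get("articles", []):
        pending.add(article["article_id"])

def on_webhook(event):
    """Mark an article done when its completed/failed event arrives."""
    pending.discard(event["data"]["article_id"])
    return len(pending)  # articles still generating

record_submission({"articles": [{"article_id": "a1"}, {"article_id": "a2"}]})
on_webhook({"type": "content.completed", "data": {"article_id": "a1"}})  # -> 1
```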
To track real-time progress during generation, subscribe to content.progress events as well. See Content Progress Events for details.

Tracking Completion

Keep a simple counter of submitted vs. completed articles per site:
# Pseudo-code for your webhook handler
submitted = {"site-a": 100, "site-b": 80, "site-c": 120}
completed = {"site-a": 0, "site-b": 0, "site-c": 0}
failed = {"site-a": 0, "site-b": 0, "site-c": 0}

def handle_webhook(event):
    site = event["site_id"]
    if event["type"] == "content.completed":
        completed[site] += 1
    elif event["type"] == "content.failed":
        failed[site] += 1
    print(f"{site}: {completed[site]}/{submitted[site]} done, {failed[site]} failed")
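With counters like the ones above, "everything is done" can be detected when each site's completed plus failed counts reach its submitted total. A small helper sketch:

```python
def all_done(submitted, completed, failed):
    """True once every site's completed + failed count reaches its submitted
    total. Sites with no events yet default to zero."""
    return all(
        completed.get(site, 0) + failed.get(site, 0) >= total
        for site, total in submitted.items()
    )

all_done({"site-a": 2}, {"site-a": 1}, {"site-a": 1})  # -> True
```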

Capacity Planning

Sites | Articles/Site | Batches/Site | Time/Site (2s gap) | Total Time (parallel)
------|---------------|--------------|--------------------|----------------------
1     | 100           | 10           | ~20s               | ~20s
5     | 100           | 10           | ~20s               | ~20s
12    | 150           | 15           | ~30s               | ~30s
12    | 100           | 10           | ~20s               | ~20s
All sites run concurrently, so total submission time stays constant regardless of how many sites you have.
Submission time is how long it takes to send all requests. Actual content generation runs asynchronously and typically takes 1-3 minutes per article depending on type and complexity.
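The per-site times in the table follow from a simple calculation, assuming one request per batch of 10 and a fixed gap after each request:

```python
import math

def submission_time_s(articles, batch_size=10, gap_s=2):
    """Sequential submission time for one site: one request per batch,
    with a fixed gap after each request."""
    batches = math.ceil(articles / batch_size)
    return batches * gap_s

submission_time_s(150)  # -> 30 (15 batches at 2-second intervals)
```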

Checklist

1. Create IdeaClouds: use the batch endpoint to research all topics. Wait for ideacloud.completed webhooks before proceeding.
2. Prepare batch files: group completed IdeaCloud IDs into JSON files of up to 10 items each.
3. Submit with throttling: run all sites in parallel, each sending batches sequentially with 2-second gaps.
4. Handle 429 errors: respect Retry-After headers and retry with exponential backoff.
5. Monitor via webhooks: track content.completed and content.failed events to know when everything is done.
Bulk generation checklist complete. You’ve set up IdeaClouds, prepared batches, implemented throttling with retry logic, and configured webhook-based progress tracking.
Last modified on March 1, 2026