This guide walks through generating large volumes of content — 80 to 150+ articles — across multiple dealership sites, each with its own API key and webhook endpoint.
Batch Pipeline
The bulk workflow follows a three-phase pipeline from keywords to published content:
Prerequisites
- An API key for each site (with ideaclouds:write and content:write scopes)
- A webhook configured on each site to receive content.completed and content.failed events
- Completed IdeaClouds for each article you want to generate
If you still need to create IdeaClouds in bulk, use the Create IdeaClouds (batch) endpoint first. It accepts up to 25 keywords per request.
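If you're scripting that step, splitting a keyword list into 25-keyword payloads is a one-liner. A minimal sketch, assuming a "keywords" field in the request body (match your actual request shape):

```python
# Split a keyword list into payloads of up to 25 keywords each,
# the per-request maximum for Create IdeaClouds (batch).
def chunk_keywords(keywords, size=25):
    return [keywords[i:i + size] for i in range(0, len(keywords), size)]

keywords = [f"keyword-{n}" for n in range(60)]
payloads = [{"keywords": chunk} for chunk in chunk_keywords(keywords)]
# 60 keywords split into chunks of 25, 25, and 10
```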
Step 1: Prepare Batch Payloads
Group your IdeaCloud IDs into batches of up to 10 (the maximum for Create content (batch)):
```json
// batch_01.json
{
  "items": [
    { "ideacloud_id": "ic-uuid-001", "article_type": "basic", "auto_compliance": true, "auto_content_tools": true },
    { "ideacloud_id": "ic-uuid-002", "article_type": "basic", "auto_compliance": true, "auto_content_tools": true },
    { "ideacloud_id": "ic-uuid-003", "article_type": "qa", "auto_compliance": true, "auto_content_tools": true }
  ]
}
```
For 100 articles, you’ll have 10 batch files. For 150, you’ll have 15.
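A small script can generate those files from a list of completed IdeaCloud IDs. This sketch mirrors the payload shape above; the single article_type per run is illustrative (vary it per item if you mix basic and qa articles):

```python
import json

# Write batch_01.json, batch_02.json, ... with at most 10 items each,
# the maximum accepted by Create content (batch).
def write_batches(ideacloud_ids, batch_size=10, article_type="basic"):
    filenames = []
    for n, start in enumerate(range(0, len(ideacloud_ids), batch_size), start=1):
        items = [
            {
                "ideacloud_id": ic,
                "article_type": article_type,
                "auto_compliance": True,
                "auto_content_tools": True,
            }
            for ic in ideacloud_ids[start:start + batch_size]
        ]
        name = f"batch_{n:02d}.json"
        with open(name, "w") as f:
            json.dump({"items": items}, f, indent=2)
        filenames.append(name)
    return filenames
```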
Step 2: Submit Batches with Throttling
Single Site

Send each batch with a pause between requests to stay well under the 60 requests/minute default limit:

```bash
API_KEY="hzk_your_key_here"
BASE_URL="https://api.app.hrizn.io/v1/public/content/batch"

for batch_file in batch_*.json; do
  HTTP_CODE=$(curl -s -o response.json -w "%{http_code}" \
    -X POST "$BASE_URL" \
    -H "X-API-Key: $API_KEY" \
    -H "Content-Type: application/json" \
    -d @"$batch_file")

  if [ "$HTTP_CODE" -eq 202 ]; then
    echo "Submitted: $batch_file"
  else
    echo "Error $HTTP_CODE on $batch_file"
    cat response.json
  fi

  sleep 2
done
```

At 2-second intervals, 15 batches (150 articles) complete in about 30 seconds, using only 15 of your 60 requests/minute budget.

Multi-Site (Parallel)

Since rate limits are per API key, you can run all sites concurrently. Each site processes its batches sequentially:

```bash
process_site() {
  local api_key="$1"
  local site_name="$2"
  local batch_dir="$3"

  for batch_file in "$batch_dir"/batch_*.json; do
    HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" \
      -X POST "https://api.app.hrizn.io/v1/public/content/batch" \
      -H "X-API-Key: $api_key" \
      -H "Content-Type: application/json" \
      -d @"$batch_file")
    echo "[$site_name] $batch_file → $HTTP_CODE"
    sleep 2
  done
}

# Launch all sites in parallel
process_site "hzk_site_a_key" "Site A" "./batches/site-a" &
process_site "hzk_site_b_key" "Site B" "./batches/site-b" &
process_site "hzk_site_c_key" "Site C" "./batches/site-c" &
# ... add all sites
wait
echo "All sites submitted"
```
When running many sites concurrently, each site still has its own rate limit. However, all sites share the same generation infrastructure — submitting thousands of articles simultaneously may result in longer generation times.
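If your tooling is Python rather than shell, the same fan-out pattern is a thread per site, with batches sent sequentially inside each thread. This is a sketch with the HTTP call passed in as a callable so the structure is clear; the function names are illustrative, not part of the API:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# One worker per site; within a site, batches go out sequentially
# with a fixed gap, mirroring the shell loop above.
def process_site(site_name, api_key, batch_files, submit, gap=2.0):
    results = []
    for batch_file in batch_files:
        results.append((batch_file, submit(api_key, batch_file)))
        time.sleep(gap)
    return site_name, results

def submit_all(sites, submit, gap=2.0):
    # sites maps a site name to (api_key, [batch file paths])
    with ThreadPoolExecutor(max_workers=len(sites)) as pool:
        futures = [
            pool.submit(process_site, name, key, files, submit, gap)
            for name, (key, files) in sites.items()
        ]
        return dict(f.result() for f in futures)
```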
Step 3: Handle Rate Limit Errors
If you receive a 429, use the Retry-After header to wait and retry with exponential backoff:
```bash
MAX_RETRIES=5

submit_batch() {
  local api_key="$1"
  local batch_file="$2"
  local retries=0
  local backoff=5

  while [ "$retries" -le "$MAX_RETRIES" ]; do
    HTTP_CODE=$(curl -s -o response.json -w "%{http_code}" \
      -D headers.txt \
      -X POST "https://api.app.hrizn.io/v1/public/content/batch" \
      -H "X-API-Key: $api_key" \
      -H "Content-Type: application/json" \
      -d @"$batch_file")

    if [ "$HTTP_CODE" -eq 202 ]; then
      echo "OK: $batch_file"
      return 0
    elif [ "$HTTP_CODE" -eq 429 ]; then
      retries=$((retries + 1))
      # Prefer the server's Retry-After value; fall back to local backoff
      retry_after=$(grep -i 'retry-after' headers.txt | tr -d '\r' | awk '{print $2}')
      wait_time=${retry_after:-$backoff}
      echo "Rate limited on $batch_file — waiting ${wait_time}s (retry $retries/$MAX_RETRIES)"
      sleep "$wait_time"
      backoff=$((backoff * 2))
    else
      echo "Failed $HTTP_CODE: $batch_file"
      return 1
    fi
  done

  echo "Gave up after $MAX_RETRIES retries: $batch_file"
  return 1
}
```
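Stripped of the HTTP plumbing, the fallback wait schedule is exponential doubling from a 5-second base, with any Retry-After value taking precedence for that attempt. A sketch of the schedule on its own:

```python
def backoff_schedule(base=5, max_retries=5):
    # Waits used when the server sends no Retry-After header:
    # with the defaults above, 5, 10, 20, 40, 80 seconds.
    wait = base
    for _ in range(max_retries):
        yield wait
        wait *= 2
```

Worst case, a batch that is rate-limited on every attempt waits 155 seconds in total before the script gives up.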
Step 4: Track Progress with Webhooks
Each site’s webhook receives events as articles generate. Log the article IDs from the batch response to correlate with incoming webhook payloads:
```json
// Webhook payload when an article finishes
{
  "type": "content.completed",
  "site_id": "550e8400-...",
  "data": {
    "article_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "article_type": "basic"
  }
}
```
To track real-time progress during generation, subscribe to content.progress events as well. See Content Progress Events for details.
Tracking Completion
Keep a simple counter of submitted vs. completed articles per site:
```python
# Pseudo-code for your webhook handler.
# Counters are keyed per site; map the site_id UUID from the payload
# to these keys however your application identifies sites.
submitted = {"site-a": 100, "site-b": 80, "site-c": 120}
completed = {"site-a": 0, "site-b": 0, "site-c": 0}
failed = {"site-a": 0, "site-b": 0, "site-c": 0}

def handle_webhook(event):
    site = event["site_id"]
    if event["type"] == "content.completed":
        completed[site] += 1
    elif event["type"] == "content.failed":
        failed[site] += 1
    print(f"{site}: {completed[site]}/{submitted[site]} done, {failed[site]} failed")
```
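A site is finished once every submitted article has either completed or failed; checking that is a small helper on top of the counters. A sketch, assuming the same three dicts as above:

```python
def site_done(site, submitted, completed, failed):
    # Every submitted article is accounted for, successfully or not.
    return completed[site] + failed[site] >= submitted[site]

def all_done(submitted, completed, failed):
    # True once every site has no articles still generating.
    return all(site_done(s, submitted, completed, failed) for s in submitted)
```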
Capacity Planning
| Sites | Articles/Site | Batches/Site | Time/Site (2s gap) | Total Time (parallel) |
|---|---|---|---|---|
| 1 | 100 | 10 | ~20s | ~20s |
| 5 | 100 | 10 | ~20s | ~20s |
| 12 | 150 | 15 | ~30s | ~30s |
| 12 | 100 | 10 | ~20s | ~20s |
All sites run concurrently, so total submission time stays constant regardless of how many sites you have.
Submission time is how long it takes to send all requests. Actual content generation runs asynchronously and typically takes 1-3 minutes per article depending on type and complexity.
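The submission column in the table above follows directly from batch count and gap:

```python
def submission_seconds(articles, batch_size=10, gap=2):
    # One request per batch, with a fixed gap after each;
    # ceil-divide the articles into batches of up to batch_size.
    batches = -(-articles // batch_size)
    return batches * gap

# 100 articles: 10 batches, ~20 seconds per site
# 150 articles: 15 batches, ~30 seconds per site
# Sites run in parallel, so the total is the max over sites, not the sum.
```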
Checklist
- Create IdeaClouds: use the batch endpoint to research all topics, and wait for ideacloud.completed webhooks before proceeding.
- Prepare batch files: group completed IdeaCloud IDs into JSON files of up to 10 items each.
- Submit with throttling: run all sites in parallel, each sending batches sequentially with 2-second gaps.
- Handle 429 errors: respect Retry-After headers and retry with exponential backoff.
- Monitor via webhooks: track content.completed and content.failed events to know when everything is done.
Bulk generation checklist complete. You’ve set up IdeaClouds, prepared batches, implemented throttling with retry logic, and configured webhook-based progress tracking.