RATE_LIMITED — Over per-key request limit

Your API key sent too many requests in the current window. The default limit is 60 requests per minute per key; enterprise keys can have this raised on request.

The API responds with 429 Too Many Requests using the standard error envelope, and includes a Retry-After header indicating how many seconds to wait before retrying.

HTTP/1.1 429 Too Many Requests
Retry-After: 18
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1714579200
{
  "success": false,
  "error": {
    "code": "RATE_LIMITED",
    "message": "Rate limit exceeded. Retry after 18 seconds.",
    "request_id": "req_01HXJZK4ABCDEF",
    "doc_url": "https://docs.surveycoder.io/errors/rate-limited"
  }
}
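The rate-limit headers shown above carry everything a client needs to pace itself. A minimal sketch of reading them from a fetch Response follows; the RateLimitInfo shape and parseRateLimit helper are our own illustration, not part of any SDK:

```typescript
// Shape of the parsed rate-limit headers (illustrative, not an SDK type).
interface RateLimitInfo {
  limit: number;               // X-RateLimit-Limit: requests allowed per window
  remaining: number;           // X-RateLimit-Remaining: requests left in this window
  resetAt: Date;               // X-RateLimit-Reset: unix seconds, converted to a Date
  retryAfterMs: number | null; // Retry-After: seconds, converted to milliseconds
}

function parseRateLimit(headers: Headers): RateLimitInfo {
  const retryAfter = headers.get('Retry-After');
  return {
    limit: Number(headers.get('X-RateLimit-Limit')),
    remaining: Number(headers.get('X-RateLimit-Remaining')),
    resetAt: new Date(Number(headers.get('X-RateLimit-Reset')) * 1000),
    retryAfterMs: retryAfter === null ? null : Number(retryAfter) * 1000,
  };
}
```

A client can check remaining after each call and start throttling before the first 429 ever arrives.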
Common causes:
  • A burst of parallel requests overran the bucket.
  • A retry loop without backoff hammered the API.
  • Multiple services share one API key and collectively exceed the limit.
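For the parallel-burst case, a small concurrency gate keeps fan-out under control so a Promise.all() over many items cannot flood the bucket at once. This is a sketch; the class is our own and the cap you choose should be tuned to your key's limit:

```typescript
// A simple gate that allows at most `max` tasks to run concurrently.
// Excess callers wait in a FIFO queue until a running task finishes.
class ConcurrencyGate {
  private active = 0;
  private queue: Array<() => void> = [];

  constructor(private readonly max: number) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    if (this.active >= this.max) {
      // At capacity: park this caller until a slot frees up.
      await new Promise<void>((resolve) => this.queue.push(resolve));
    }
    this.active++;
    try {
      return await task();
    } finally {
      this.active--;
      // Wake exactly one waiter, if any.
      this.queue.shift()?.();
    }
  }
}
```

Usage: wrap each API call in gate.run(() => fetch(...)) instead of firing them all at once.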
How to fix:
  1. Respect Retry-After. Sleep that many seconds, then retry.
  2. Add exponential backoff to any retry logic (1s, 2s, 4s, 8s), preferring the server's Retry-After value when present and capping waits at a sensible ceiling such as 30 seconds.
  3. Use per-service keys. Splitting one key across many services makes a single noisy neighbor take down everyone.
  4. Batch when possible. Most coding workflows benefit from larger batches anyway — 200 responses in one call costs the same credits and uses one request quota slot.
  5. For sustained loads above 60 RPM, contact sales for an enterprise limit raise.
// Retry fn on RATE_LIMITED errors, honoring Retry-After when available
// and falling back to exponential backoff, with waits capped at 30 seconds.
async function withBackoff<T>(fn: () => Promise<T>, max = 5): Promise<T> {
  let attempt = 0;
  while (true) {
    try {
      return await fn();
    } catch (err: any) {
      // Rethrow anything that isn't a rate-limit error, or once retries are exhausted.
      if (err.code !== 'RATE_LIMITED' || attempt >= max) throw err;
      // Prefer the server's Retry-After (seconds); otherwise back off exponentially.
      const wait = Math.min(err.retryAfter ?? 2 ** attempt, 30) * 1000;
      await new Promise((r) => setTimeout(r, wait));
      attempt++;
    }
  }
}
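To apply fix #4, a chunking helper turns many small calls into a few batched ones. The helper below is our own sketch; the submitBatch call in the usage comment is hypothetical, standing in for whatever batch endpoint your integration already uses:

```typescript
// Split an array into batches of at most `size` items.
// The batch size of 200 mirrors fix #4 above.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Usage sketch: 1000 responses become 5 requests instead of 1000,
// and each batched call can still be wrapped in withBackoff:
//   for (const batch of chunk(responses, 200)) {
//     await withBackoff(() => submitBatch(batch)); // submitBatch is hypothetical
//   }
```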
See also:
  • Idempotency — safe to combine with backoff
  • Credits — rate limits and credits are separate quotas