Rate Limits & Quotas
Current Rate Limits
Rate limiting is now enforced for all API requests. The following limits apply per API key:
Active Limits
- 60 requests per minute (per API key)
- 1000 requests per hour (per API key)
- 10,000 requests per day (per API key)
All API responses include rate limit headers to help you track your usage.
Rate Limit Headers
Every API response includes these headers:
```
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 59
X-RateLimit-Reset: 1640995200
```
Header Descriptions
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum number of requests allowed in the current time window |
| X-RateLimit-Remaining | Number of requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp (seconds) when the rate limit resets |
Note: The limit shown corresponds to the most restrictive tier (per-minute limit).
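These headers let a client throttle itself before ever receiving a 429. As a minimal sketch (the `checkRateLimit` helper below is illustrative, not part of the API):

```javascript
// Illustrative helper: pause when the current window is exhausted,
// using the rate limit headers described above.
async function checkRateLimit(response) {
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const reset = parseInt(response.headers.get('X-RateLimit-Reset'), 10);
  if (remaining === 0) {
    // Sleep until the window resets (reset is a Unix timestamp in seconds)
    const waitMs = Math.max(0, reset * 1000 - Date.now());
    await new Promise(resolve => setTimeout(resolve, waitMs));
  }
  return response;
}
```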
Rate Limit Errors
When you exceed the rate limit, you'll receive a 429 Too Many Requests response:
```json
{
  "success": false,
  "error": "Rate limit exceeded",
  "message": "Too many requests. Please try again in 30 seconds.",
  "retryAfter": 30,
  "limit": 60,
  "reset": 1640995200
}
```
Response Headers:
```
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1640995200
Retry-After: 30
```
How Rate Limiting Works
The API uses a multi-tier sliding window approach:
- Per-Minute Limit: 60 requests per 60-second window
- Per-Hour Limit: 1000 requests per 60-minute window
- Per-Day Limit: 10,000 requests per 24-hour window
Each request is checked against all three tiers. If any tier is exceeded, the request is rejected with a 429 error.
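The three-tier check can be sketched as follows (the `MultiTierLimiter` class is illustrative; the service's actual server-side implementation is not exposed):

```javascript
// Illustrative sketch of a multi-tier sliding-window check; the class
// name and structure are assumptions, not the service's actual code.
class MultiTierLimiter {
  constructor() {
    // [window length in ms, max requests allowed in that window]
    this.tiers = [
      [60 * 1000, 60],              // per minute
      [60 * 60 * 1000, 1000],       // per hour
      [24 * 60 * 60 * 1000, 10000], // per day
    ];
    this.timestamps = []; // one entry per accepted request
  }

  // Returns true if the request passes all three tiers; a false
  // result corresponds to the 429 response described above.
  tryAcquire(now = Date.now()) {
    // Keep only timestamps still inside the largest (daily) window
    const dayAgo = now - this.tiers[2][0];
    this.timestamps = this.timestamps.filter(t => t > dayAgo);
    for (const [windowMs, limit] of this.tiers) {
      const inWindow = this.timestamps.filter(t => t > now - windowMs).length;
      if (inWindow >= limit) return false;
    }
    this.timestamps.push(now);
    return true;
  }
}
```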
Best Practices
1. Implement Exponential Backoff
```javascript
async function fetchWithBackoff(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch(url, options);
    if (response.status === 429) {
      // Prefer the server's Retry-After header (seconds); fall back to
      // exponential backoff (1s, 2s, 4s, ...) when the header is absent
      const retryAfter = parseInt(response.headers.get('Retry-After'), 10) || Math.pow(2, i);
      await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
      continue;
    }
    return response;
  }
  throw new Error('Max retries exceeded');
}
```
2. Use Queue Systems
For bulk processing, implement a queue system:
```javascript
class RequestQueue {
  constructor(maxPerMinute = 60) {
    this.queue = [];
    this.maxPerMinute = maxPerMinute;
    this.requestTimes = [];
    this.processing = false;
  }

  add(requestFn) {
    // Wrap each request so its caller always receives its own result,
    // even when several requests are queued concurrently
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing) return; // only one drain loop at a time
    this.processing = true;
    while (this.queue.length > 0) {
      // Drop timestamps older than the 60-second window
      const oneMinuteAgo = Date.now() - 60000;
      this.requestTimes = this.requestTimes.filter(time => time > oneMinuteAgo);
      // If the window is full, wait until the oldest request ages out
      if (this.requestTimes.length >= this.maxPerMinute) {
        const waitTime = 60000 - (Date.now() - this.requestTimes[0]);
        await new Promise(resolve => setTimeout(resolve, waitTime));
        continue;
      }
      // Execute the next queued request
      const { requestFn, resolve, reject } = this.queue.shift();
      this.requestTimes.push(Date.now());
      requestFn().then(resolve, reject);
    }
    this.processing = false;
  }
}

// Usage
const queue = new RequestQueue(60);
const results = await Promise.all([
  queue.add(() => generateBlueprint(data1)),
  queue.add(() => generateBlueprint(data2)),
  queue.add(() => generateBlueprint(data3))
]);
```
3. Cache Responses
Cache responses when appropriate to reduce API calls:
```javascript
const cache = new Map();

async function generateBlueprintCached(birthData) {
  // Note: JSON.stringify output depends on property order, so build
  // birthData objects consistently to get reliable cache hits
  const cacheKey = JSON.stringify(birthData);
  if (cache.has(cacheKey)) {
    return cache.get(cacheKey);
  }
  const result = await generateBlueprint(birthData);
  cache.set(cacheKey, result);
  return result;
}
```
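If cached responses can go stale, a time-to-live keeps entries from being served forever. A sketch, assuming the `generateBlueprint` client function from the examples above; the one-hour TTL is an example value, not an API requirement:

```javascript
// Sketch: add a time-to-live on top of a response cache so entries
// eventually refresh. TTL_MS is an example value, not an API rule;
// `generateBlueprint` is the client function assumed earlier.
const TTL_MS = 60 * 60 * 1000;
const ttlCache = new Map();

async function generateBlueprintWithTtl(birthData, fetcher = generateBlueprint) {
  const key = JSON.stringify(birthData);
  const entry = ttlCache.get(key);
  if (entry && Date.now() - entry.at < TTL_MS) {
    return entry.value; // still fresh: no API call, no credits spent
  }
  const value = await fetcher(birthData);
  ttlCache.set(key, { at: Date.now(), value });
  return value;
}
```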
4. Batch Requests
If processing multiple blueprints, space them out:
```javascript
async function batchGenerate(birthDataArray) {
  const results = [];
  const batchSize = 10;
  // 10 seconds between 10-request batches keeps throughput at
  // 60 requests/minute, matching the per-minute limit
  const delayBetweenBatches = 10000;
  for (let i = 0; i < birthDataArray.length; i += batchSize) {
    const batch = birthDataArray.slice(i, i + batchSize);
    const batchResults = await Promise.all(
      batch.map(data => generateBlueprint(data))
    );
    results.push(...batchResults);
    // Wait before the next batch
    if (i + batchSize < birthDataArray.length) {
      await new Promise(resolve => setTimeout(resolve, delayBetweenBatches));
    }
  }
  return results;
}
```
Credit Quotas
Each API call consumes credits from your account:
| Mode | Credits Required |
|---|---|
| With AI | 500 credits |
| Data Only | 45 credits |
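As a quick illustration (the helper and its mode keys are hypothetical, not API fields), the credit cost of a batch can be estimated up front:

```javascript
// Hypothetical helper: estimate credit cost before running a batch.
// The per-call costs come from the table above.
const CREDIT_COST = { withAI: 500, dataOnly: 45 };

function estimateCredits(requestCount, mode) {
  return requestCount * CREDIT_COST[mode];
}
```

For example, 20 data-only calls cost `estimateCredits(20, 'dataOnly')` = 900 credits, while just 3 AI-backed calls cost 1,500.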
Monitoring Credits
Check your remaining credits in the API response:
```json
{
  "meta": {
    "creditsUsed": 500,
    "creditsRemaining": 1500
  }
}
```
Insufficient Credits
When you run out of credits, you'll receive a 402 Payment Required error:
```json
{
  "success": false,
  "error": "Insufficient credits. Required: 500, Available: 250"
}
```
Purchase additional credits from your credits page.