Rate Limits
Understanding API rate limits and how to handle them in your integrations.
Overview
Regen Therapy uses rate limiting to ensure fair usage and maintain API stability for all users. Rate limits are applied per API key and are measured using a sliding window algorithm.
| Default Limit | Scope |
|---|---|
| 1,000 requests / hour | GET requests |
| 200 requests / hour | POST, PUT, DELETE requests |

All limits are measured over a rolling 1-hour window.
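A sliding-window limiter can be sketched as follows. This is a simplified in-memory illustration of the algorithm, not Regen Therapy's actual implementation; the class name and deque-based bookkeeping are assumptions:

```python
import time
from collections import deque


class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds, sliding continuously."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.timestamps = deque()  # request times still inside the window

    def allow(self, now: float = None) -> bool:
        """Record and permit a request, or reject it if the window is full."""
        now = time.monotonic() if now is None else now
        # Evict timestamps that have slid out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False
```

Because old timestamps are evicted continuously rather than at fixed boundaries, a burst at the end of one window cannot combine with a burst at the start of the next.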
Limits by Endpoint
| Endpoint Category | Method | Rate Limit | Burst Limit |
|---|---|---|---|
| Orders | GET | 100 / hour | 20 / minute |
| Orders | POST | 50 / hour | 10 / minute |
| Products | GET | 200 / hour | 50 / minute |
| Products | POST/PUT | 50 / hour | 10 / minute |
| Inventory | GET | 200 / hour | 50 / minute |
| Inventory | PUT | 100 / hour | 20 / minute |
| Webhooks | GET | 100 / hour | 20 / minute |
| Webhooks | POST | 20 / hour | 5 / minute |
| API Keys | POST | 10 / hour | 3 / minute |
| Commission Payouts | POST | 20 / hour | 5 / minute |
Rate Limit Headers
Every API response includes headers that provide information about your current rate limit status.
| Header | Description | Example |
|---|---|---|
| X-RateLimit-Limit | Maximum requests allowed in the current window | 100 |
| X-RateLimit-Remaining | Requests remaining in the current window | 87 |
| X-RateLimit-Reset | Unix timestamp when the rate limit resets | 1714567890 |
| Retry-After | Seconds to wait before retrying (only on 429) | 120 |
Example response:

```http
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1714567890

{
  "success": true,
  "data": { ... }
}
```
Handling Rate Limits
When you exceed a rate limit, the API returns HTTP 429 Too Many Requests with an error body:

```json
{
  "success": false,
  "error": {
    "code": "RATE_LIMITED",
    "message": "Too many requests. Please retry after 120 seconds."
  },
  "meta": {
    "timestamp": "2024-04-17T10:30:00Z",
    "requestId": "req_abc123xyz"
  }
}
```

Honor the Retry-After header when it is present, and fall back to exponential backoff otherwise. For example, in JavaScript:

```javascript
async function fetchWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);
    if (response.status === 429) {
      const retryAfter = response.headers.get('Retry-After');
      const waitTime = retryAfter
        ? parseInt(retryAfter, 10) * 1000
        : Math.pow(2, attempt) * 1000; // Exponential backoff
      console.log(`Rate limited. Waiting ${waitTime / 1000}s before retry...`);
      await new Promise(resolve => setTimeout(resolve, waitTime));
      continue;
    }
    return response;
  }
  throw new Error('Max retries exceeded');
}
```

And in Python:

```python
import time

import requests


def fetch_with_retry(url: str, headers: dict, max_retries: int = 3) -> requests.Response:
    for attempt in range(max_retries + 1):
        response = requests.get(url, headers=headers)
        if response.status_code == 429:
            retry_after = response.headers.get('Retry-After')
            wait_time = int(retry_after) if retry_after else 2 ** attempt
            print(f"Rate limited. Waiting {wait_time}s before retry...")
            time.sleep(wait_time)
            continue
        return response
    raise RuntimeError("Max retries exceeded")
```

Enterprise Rate Limits
Enterprise customers can request increased rate limits based on their integration needs.
| Tier | Read Requests | Write Requests | Features |
|---|---|---|---|
| Standard | 1,000 / hour | 200 / hour | Default tier for all accounts |
| Professional | 5,000 / hour | 1,000 / hour | Priority support, dedicated endpoints |
| Enterprise | Custom | Custom | SLA guarantees, dedicated infrastructure |
Best Practices
Cache API responses where appropriate to reduce the number of requests. Product catalogs and inventory levels can often be cached for short periods.
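Caching can be as simple as a TTL wrapper around your GET calls. A minimal sketch, where the 60-second TTL and the `cached_get` helper are illustrative assumptions rather than part of the API:

```python
import time

_cache = {}  # url -> (fetched_at, value)


def cached_get(url: str, fetch, ttl: float = 60.0):
    """Return the cached value for `url` if it is younger than `ttl` seconds."""
    now = time.monotonic()
    hit = _cache.get(url)
    if hit is not None and now - hit[0] < ttl:
        return hit[1]  # cache hit: no API request made
    value = fetch(url)  # e.g. requests.get(url, headers=...).json()
    _cache[url] = (now, value)
    return value
```

Short TTLs (tens of seconds) are usually enough to absorb repeated reads of product catalogs or inventory levels without serving stale data.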
Instead of polling for updates, use webhooks to receive real-time notifications about events like order status changes.
Use batch endpoints where available to perform multiple operations in a single request, reducing your overall request count.
Track the rate limit headers in your responses to proactively manage your request rate and avoid hitting limits.
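Proactive throttling can be derived directly from the headers above. A sketch, where the `min_remaining` threshold of 5 is an illustrative choice:

```python
import time
from typing import Optional


def seconds_to_pause(headers: dict, min_remaining: int = 5,
                     now: Optional[float] = None) -> float:
    """Given rate-limit response headers, return how long to pause before the next call."""
    remaining = int(headers.get('X-RateLimit-Remaining', min_remaining + 1))
    if remaining > min_remaining:
        return 0.0  # plenty of budget left
    reset_at = int(headers.get('X-RateLimit-Reset', 0))
    now = time.time() if now is None else now
    return max(0.0, reset_at - now)  # wait until the window resets
```

Calling `time.sleep(seconds_to_pause(response.headers))` after each request keeps a client just under its limit instead of tripping 429 responses and retrying.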