Rate Limits

Understanding API rate limits and how to handle them in your integrations.

Overview

Regen Therapy uses rate limiting to ensure fair usage and maintain API stability for all users. Rate limits are applied per API key and are measured using a sliding window algorithm.

Default Limits

Window: 1 hour (rolling window period)
Read requests (GET): 1,000 / hour
Write requests (POST, PUT, DELETE): 200 / hour
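The sliding window behavior described above can be illustrated with a small client-side sketch. This is an illustration of the algorithm only, not Regen Therapy's actual implementation; the server tracks requests per API key, while this example tracks them in memory:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Sketch of a sliding window rate limiter.

    Keeps one timestamp per accepted request; a new request is allowed
    only if fewer than `limit` requests fall inside the window.
    """

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.timestamps = deque()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Evict timestamps that have slid out of the window
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False
```

Unlike a fixed hourly bucket, requests "expire" individually one window-length after they were made, so capacity is restored gradually rather than all at once at the top of the hour.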

Limits by Endpoint

Endpoint Category  | Method   | Rate Limit | Burst Limit
Orders             | GET      | 100 / hour | 20 / minute
Orders             | POST     | 50 / hour  | 10 / minute
Products           | GET      | 200 / hour | 50 / minute
Products           | POST/PUT | 50 / hour  | 10 / minute
Inventory          | GET      | 200 / hour | 50 / minute
Inventory          | PUT      | 100 / hour | 20 / minute
Webhooks           | GET      | 100 / hour | 20 / minute
Webhooks           | POST     | 20 / hour  | 5 / minute
API Keys           | POST     | 10 / hour  | 3 / minute
Commission Payouts | POST     | 20 / hour  | 5 / minute

Rate Limit Headers

Every API response includes headers that provide information about your current rate limit status.

Response Headers
Header                | Description                                    | Example
X-RateLimit-Limit     | Maximum requests allowed in the current window | 100
X-RateLimit-Remaining | Requests remaining in the current window       | 87
X-RateLimit-Reset     | Unix timestamp when the rate limit resets      | 1714567890
Retry-After           | Seconds to wait before retrying (only on 429)  | 120
Example Response
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1714567890

{
  "success": true,
  "data": { ... }
}
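These headers can be read with a small helper. This sketch accepts any headers mapping, such as the `response.headers` object from the `requests` library or a plain dict:

```python
def rate_limit_status(headers):
    """Read the rate limit headers from a response's headers mapping.

    Missing headers default to 0 so the helper is safe to call on
    responses that omit them.
    """
    return {
        "limit": int(headers.get("X-RateLimit-Limit", 0)),
        "remaining": int(headers.get("X-RateLimit-Remaining", 0)),
        "reset": int(headers.get("X-RateLimit-Reset", 0)),
    }
```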

Handling Rate Limits

Rate Limit Error Response
HTTP 429 Too Many Requests
{
  "success": false,
  "error": {
    "code": "RATE_LIMITED",
    "message": "Too many requests. Please retry after 120 seconds."
  },
  "meta": {
    "timestamp": "2024-04-17T10:30:00Z",
    "requestId": "req_abc123xyz"
  }
}
Implementing Retry Logic (JavaScript)
async function fetchWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);
    
    if (response.status === 429) {
      const retryAfter = response.headers.get('Retry-After');
      const waitTime = retryAfter 
        ? parseInt(retryAfter, 10) * 1000 
        : Math.pow(2, attempt) * 1000; // Exponential backoff
      
      console.log(`Rate limited. Waiting ${waitTime/1000}s before retry...`);
      await new Promise(resolve => setTimeout(resolve, waitTime));
      continue;
    }
    
    return response;
  }
  
  throw new Error('Max retries exceeded');
}
Implementing Retry Logic (Python)
import time
import requests

def fetch_with_retry(url: str, headers: dict, max_retries: int = 3) -> requests.Response:
    for attempt in range(max_retries + 1):
        response = requests.get(url, headers=headers)
        
        if response.status_code == 429:
            retry_after = response.headers.get('Retry-After')
            wait_time = int(retry_after) if retry_after else (2 ** attempt)
            
            print(f"Rate limited. Waiting {wait_time}s before retry...")
            time.sleep(wait_time)
            continue
        
        return response
    
    raise Exception("Max retries exceeded")

Enterprise Rate Limits

Enterprise customers can request increased rate limits based on their integration needs.

Tier         | Read Requests | Write Requests | Features
Standard     | 1,000 / hour  | 200 / hour     | Default tier for all accounts
Professional | 5,000 / hour  | 1,000 / hour   | Priority support, dedicated endpoints
Enterprise   | Custom        | Custom         | SLA guarantees, dedicated infrastructure

Best Practices

Cache Responses

Cache API responses where appropriate to reduce the number of requests. Product catalogs and inventory levels can often be cached for short periods.
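A minimal sketch of such a short-lived cache is shown below. The time-to-live value is illustrative; choose one appropriate for how fresh your data needs to be:

```python
import time

class TTLCache:
    """Cache values for a short period to avoid repeat API requests."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self._store.get(key)
        if entry and now - entry[1] < self.ttl:
            return entry[0]
        return None  # Missing or expired: caller should re-fetch

    def set(self, key, value, now=None):
        now = time.monotonic() if now is None else now
        self._store[key] = (value, now)
```

Check the cache before issuing a request; on a miss, fetch from the API and store the result.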

Use Webhooks

Instead of polling for updates, use webhooks to receive real-time notifications about events like order status changes.

Batch Operations

Use batch endpoints where available to perform multiple operations in a single request, reducing your overall request count.
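If a batch endpoint caps how many operations fit in one request, a small helper can split a large update list into request-sized chunks. This is a sketch; the maximum batch size is an assumption, so check the specific endpoint's documentation:

```python
def chunk(items, size):
    """Split a list of operations into batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]
```

Sending one request per chunk instead of one per item turns, say, 250 inventory updates with a batch size of 50 into 5 requests.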

Monitor Usage

Track the rate limit headers in your responses to proactively manage your request rate and avoid hitting limits.
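For example, a small helper can decide whether to pause before the next request based on the `X-RateLimit-Remaining` and `X-RateLimit-Reset` headers. This is a sketch; the `threshold` value is an assumption you should tune for your traffic:

```python
import time

def wait_if_near_limit(headers, threshold=5):
    """Return seconds to sleep when remaining requests run low.

    Returns 0.0 while plenty of quota remains; otherwise waits
    until the Unix timestamp in X-RateLimit-Reset.
    """
    remaining = int(headers.get("X-RateLimit-Remaining", threshold + 1))
    if remaining > threshold:
        return 0.0
    reset = int(headers.get("X-RateLimit-Reset", 0))
    return max(0.0, reset - time.time())
```

Calling `time.sleep(wait_if_near_limit(response.headers))` after each request smooths traffic out before the API ever returns a 429.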