Onboarder

API Documentation

Rate Limits

Understanding API rate limits and how to handle them effectively.

Overview

Rate limits protect the Onboarder API from abuse and ensure fair usage across all clients. A default limit of 100 requests per 60 seconds is applied to all endpoints, enforced per API key or OAuth access token.

Simple and consistent: One rate limit applies to all endpoints. Each API key or OAuth token has its own independent rate limit bucket with a rolling 60-second window.

Current Rate Limits

All API endpoints have a default rate limit applied uniformly across the platform.

100 requests per 60 seconds, applied per API key or OAuth access token.

Rolling Window

Rate limits reset on a rolling 60-second window, not at fixed intervals.

Per Identifier

Each API key or OAuth access token has its own independent rate limit bucket.

IP Fallback

If no API key is provided, rate limits are applied per IP address.
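The rolling window and per-identifier buckets described above can be sketched as a sliding-window counter. This is an illustrative model of the behavior, not Onboarder's actual server-side implementation (the class name and structure are invented for the example):

```javascript
// Illustrative model of a rolling 60-second rate limit with an
// independent bucket per identifier (API key, token, or IP address).
class SlidingWindowLimiter {
  constructor(limit = 100, windowMs = 60000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = new Map(); // identifier -> timestamps of accepted requests
  }

  // Returns true if a request for this identifier is allowed at time `now`.
  allow(identifier, now = Date.now()) {
    const times = this.timestamps.get(identifier) || [];
    // Drop requests that have aged out of the rolling window.
    const recent = times.filter(t => now - t < this.windowMs);
    if (recent.length >= this.limit) {
      this.timestamps.set(identifier, recent);
      return false; // over the limit for this window
    }
    recent.push(now);
    this.timestamps.set(identifier, recent);
    return true;
  }
}
```

Because the window rolls rather than resetting at fixed intervals, capacity frees up gradually as old requests age out, instead of all at once at the top of the minute.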

Rate Limit Headers

Every API response includes rate limit information in the headers:

Header                   Description
X-RateLimit-Limit        Maximum number of requests allowed in the time window
X-RateLimit-Remaining    Number of requests remaining in the current window
X-RateLimit-Reset        Unix timestamp when the rate limit window resets
Retry-After              Seconds to wait before retrying (included in 429 responses)

Example Headers

HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1705320000
Content-Type: application/json
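These headers can be checked on every response to throttle proactively rather than waiting for a 429. A minimal sketch; the threshold of 10 remaining requests is an arbitrary choice for the example, not an API requirement:

```javascript
// Read rate-limit headers from a fetch Response (or any object with a
// Headers-like .get()) and report whether the client should slow down.
function checkRateLimit(headers, threshold = 10) {
  const remaining = parseInt(headers.get('X-RateLimit-Remaining'), 10);
  const reset = parseInt(headers.get('X-RateLimit-Reset'), 10);
  if (Number.isNaN(remaining) || Number.isNaN(reset)) {
    return { shouldThrottle: false }; // headers absent; nothing to act on
  }
  // X-RateLimit-Reset is a Unix timestamp in seconds.
  const msUntilReset = Math.max(0, reset * 1000 - Date.now());
  return {
    shouldThrottle: remaining < threshold,
    remaining,
    msUntilReset,
  };
}
```
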

Rate Limit Exceeded Response

When you exceed the rate limit, the API returns a 429 Too Many Requests status:

HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1705320000
Retry-After: 45
Content-Type: application/json

{
  "error": "rate_limit_exceeded",
  "error_description": "Too many requests. Please retry after some time",
  "error_code": "RATE_001",
  "retry_after": 45
}
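The retry_after field in the JSON body mirrors the Retry-After header, so a client can fall back to the body if the header is unavailable. A sketch under that assumption; the 60-second default is illustrative, not documented API behavior:

```javascript
// Decide how long to wait after a 429: prefer the Retry-After header,
// fall back to the response body's retry_after field, then to a default.
function retryDelaySeconds(headers, body, fallback = 60) {
  const fromHeader = parseInt(headers.get('Retry-After'), 10);
  if (!Number.isNaN(fromHeader)) return fromHeader;
  if (body && typeof body.retry_after === 'number') return body.retry_after;
  return fallback; // illustrative default, not API-specified
}
```
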

Handling Rate Limits

Implement exponential backoff with retry logic when you receive a 429 response. This approach automatically retries failed requests while respecting the rate-limit headers, without overwhelming the API.

Exponential Backoff Implementation

async function makeRequestWithRetry(url, options, maxRetries = 3) {
  let retries = 0;
  while (retries < maxRetries) {
    try {
      const response = await fetch(url, options);

      // Warn when the remaining budget in the current window runs low
      const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
      if (remaining < 10) {
        console.warn(`Rate limit warning: ${remaining} requests remaining`);
      }

      if (response.status === 429) {
        const retryAfter = parseInt(response.headers.get('Retry-After'), 10) || 60;
        const waitTime = Math.min(retryAfter * 1000, 60000); // Cap the wait at 60 seconds
        console.log(`Rate limited. Waiting ${waitTime}ms before retry...`);
        await new Promise(resolve => setTimeout(resolve, waitTime));
        retries++;
        continue;
      }

      return response;
    } catch (error) {
      if (retries === maxRetries - 1) throw error;
      // Exponential backoff for network errors: 1s, 2s, 4s, ...
      const backoffTime = Math.pow(2, retries) * 1000;
      await new Promise(resolve => setTimeout(resolve, backoffTime));
      retries++;
    }
  }
  throw new Error('Max retries exceeded');
}

// Usage
const response = await makeRequestWithRetry('https://api.onboarder.com/api/v1/kyc/verify', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${accessToken}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(verificationData)
});
const data = await response.json();
console.log(data);

Need Higher Limits?

The default rate limit of 100 requests per minute is suitable for most applications. If you require higher limits, contact our team to discuss your use case.

We can configure custom rate limits based on:

  • Expected traffic volume and usage patterns
  • Business requirements and SLAs
  • Enterprise infrastructure needs

Contact support@onboarder.com to discuss custom rate limits.

Best Practices

  • Monitor rate limit headers and implement warnings before hitting limits
  • Implement exponential backoff with jitter for retries
  • Cache responses when appropriate to reduce API calls
  • Batch requests when possible to stay within limits
  • Use webhooks instead of polling for real-time updates
  • Implement request queuing for high-volume applications
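Several of these practices (request queuing, spacing calls to stay under the limit, jitter) can be combined in a small client-side queue. A sketch with illustrative names and intervals, not a prescribed implementation; 600 ms spacing keeps a single worker safely under 100 requests per 60 seconds:

```javascript
// Minimal request queue: a single worker drains queued tasks with a
// fixed spacing plus jitter, so calls never burst past the limit.
class RequestQueue {
  constructor(minIntervalMs = 600) {
    this.minIntervalMs = minIntervalMs; // ~100 req / 60 s with headroom
    this.queue = [];
    this.draining = false;
  }

  // Enqueue a function returning a promise; resolves with its result.
  enqueue(task) {
    return new Promise((resolve, reject) => {
      this.queue.push({ task, resolve, reject });
      this.drain();
    });
  }

  async drain() {
    if (this.draining) return;
    this.draining = true;
    while (this.queue.length > 0) {
      const { task, resolve, reject } = this.queue.shift();
      try {
        resolve(await task());
      } catch (err) {
        reject(err);
      }
      // Fixed spacing plus a little jitter so retry bursts don't align
      const jitter = Math.random() * 100;
      await new Promise(r => setTimeout(r, this.minIntervalMs + jitter));
    }
    this.draining = false;
  }
}
```

Each call site simply awaits `queue.enqueue(() => fetch(...))`; the queue preserves submission order and spaces the actual network calls.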