Rate Limiting
Rate limits, quotas, headers, and best practices for managing your TCG Price Lookup API usage.
Limits by plan
| Plan | Price | Requests/day | Burst limit |
|---|---|---|---|
| Free | $0 | 200 | 1 req / 3 seconds |
| Trader | $14.99/mo | 10,000 | 1 req / second |
| Business | $89.99/mo | 100,000 | 3 req / second |
Daily limits reset at midnight UTC. Burst limits apply per second — exceeding them triggers a temporary 429 even if you have daily quota remaining.
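Since quotas reset at midnight UTC, it can be useful to compute how long until the next reset. A minimal sketch (`secondsUntilUtcMidnight` is a hypothetical helper, not part of the API):

```js
// Seconds until the daily quota resets at the next midnight UTC
function secondsUntilUtcMidnight(now = new Date()) {
  const nextMidnight = new Date(
    Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate() + 1)
  );
  return Math.round((nextMidnight - now) / 1000);
}
```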
Rate limit headers
Every API response includes these headers so you can monitor your usage in real time:
```http
X-RateLimit-Limit: 10000
X-RateLimit-Remaining: 9987
X-RateLimit-Reset: 1712764800
```
| Header | Description |
|---|---|
| X-RateLimit-Limit | Your total daily request allowance |
| X-RateLimit-Remaining | Requests remaining for today |
| X-RateLimit-Reset | Unix timestamp when the limit resets (midnight UTC) |
Reading these headers in code:
```js
const response = await fetch('https://api.tcgpricelookup.com/v1/search?q=charizard', {
  headers: { 'X-API-Key': process.env.TCG_API_KEY }
});

const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
const resetAt = parseInt(response.headers.get('X-RateLimit-Reset'), 10);

if (remaining < 50) {
  const resetDate = new Date(resetAt * 1000);
  console.warn(`Low quota: ${remaining} requests left. Resets at ${resetDate.toISOString()}`);
}
```
When you’re rate limited
You’ll receive a 429 Too Many Requests response:
```json
{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "You have exceeded your rate limit. Try again in 60 seconds.",
    "status": 429
  }
}
```
The Retry-After header tells you exactly how many seconds to wait:
```http
HTTP/1.1 429 Too Many Requests
Retry-After: 42
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1712764800
```
Always wait for at least Retry-After seconds before retrying. Retrying immediately will result in another 429.
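As a sketch, assuming a fetch-style client, honoring Retry-After for a single retry might look like this (`retryAfter429` is a hypothetical helper, not part of the API):

```js
// Retry once after a 429, sleeping for the server-specified Retry-After.
// Hypothetical helper; adapt to your own HTTP client.
async function retryAfter429(url, options) {
  const res = await fetch(url, options);
  if (res.status !== 429) return res;
  // Fall back to 1 second if the header is somehow missing
  const waitSeconds = parseInt(res.headers.get('Retry-After') ?? '1', 10);
  await new Promise(r => setTimeout(r, waitSeconds * 1000));
  return fetch(url, options);
}
```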
Use batch lookups
The single biggest win for reducing request count is using the batch endpoint for multi-card lookups.
```js
// Bad: 20 separate requests = 20 quota units
for (const id of cardIds) {
  const card = await tcg.getCard(id);
  process(card);
}

// Good: 1 batch request = 1 quota unit
const cards = await tcg.batchLookup(cardIds); // up to 20 IDs per call
cards.data.forEach(process);
```
Batch lookups accept up to 20 card IDs per request. If you have more than 20 cards, split into chunks:
```js
function chunk(arr, size) {
  return Array.from({ length: Math.ceil(arr.length / size) }, (_, i) =>
    arr.slice(i * size, i * size + size)
  );
}

const chunks = chunk(allCardIds, 20);
const results = await Promise.all(chunks.map(ids => tcg.batchLookup(ids)));
const allCards = results.flatMap(r => r.data);
```
Cache responses
Card prices update periodically — not continuously. Caching for even a few minutes dramatically reduces your request count without sacrificing data freshness.
```js
// Simple in-memory cache with TTL
const CACHE_TTL_MS = 5 * 60 * 1000; // 5 minutes
const cache = new Map();

async function getCardCached(id) {
  const entry = cache.get(id);
  if (entry && Date.now() - entry.timestamp < CACHE_TTL_MS) {
    return entry.data;
  }
  const card = await tcg.getCard(id);
  cache.set(id, { data: card, timestamp: Date.now() });
  return card;
}
```
For production, use a proper cache store like Redis:
```js
import { createClient } from 'redis';

const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect(); // node-redis v4+ clients must connect before use

async function getCardCached(id) {
  const cached = await redis.get(`tcg:card:${id}`);
  if (cached) return JSON.parse(cached);
  const card = await tcg.getCard(id);
  // Cache for 5 minutes (300 seconds)
  await redis.setEx(`tcg:card:${id}`, 300, JSON.stringify(card));
  return card;
}
```
Rate limiter middleware
If you’re building an app that makes many API calls, a client-side rate limiter prevents you from accidentally hitting burst limits:
```js
class RateLimiter {
  constructor(requestsPerSecond) {
    this.interval = 1000 / requestsPerSecond;
    this.nextSlot = 0; // earliest timestamp the next request may fire
  }

  async acquire() {
    const now = Date.now();
    // Reserve a slot up front so concurrent callers don't all pass at once
    this.nextSlot = Math.max(now, this.nextSlot + this.interval);
    const wait = this.nextSlot - now;
    if (wait > 0) await new Promise(r => setTimeout(r, wait));
  }
}

const limiter = new RateLimiter(1); // 1 request per second (Trader plan)

async function getCard(id) {
  await limiter.acquire();
  return tcg.getCard(id);
}
```
Implement exponential backoff
When you do hit rate limits, back off exponentially with jitter to avoid thundering herd problems:
```js
async function fetchWithRetry(fn, maxRetries = 4) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Only retry 429s; rethrow other errors and the final failed attempt
      if (err.status !== 429 || attempt === maxRetries - 1) throw err;
      const baseDelay = err.retryAfter
        ? err.retryAfter * 1000
        : Math.pow(2, attempt) * 1000; // 1s, 2s, 4s
      const jitter = Math.random() * 500; // up to 500 ms of jitter
      await new Promise(r => setTimeout(r, baseDelay + jitter));
    }
  }
}
```
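The delay computation can also be factored out and tested on its own. A sketch of the same logic as a pure function (`backoffDelayMs` is a hypothetical helper name):

```js
// Backoff delay in ms for a given attempt: honor Retry-After when present,
// otherwise double a 1-second base each attempt, plus up to 500 ms of jitter.
function backoffDelayMs(attempt, retryAfterSeconds) {
  const base = retryAfterSeconds
    ? retryAfterSeconds * 1000
    : Math.pow(2, attempt) * 1000;
  return base + Math.random() * 500;
}
```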
Monitoring usage via the dashboard
Your dashboard shows:
- Daily usage graph — requests over the past 30 days
- Current quota — how many requests remain today
- Reset countdown — time until your daily limit resets at midnight UTC
- Plan usage — percentage of your daily limit consumed
You can also check usage programmatically by inspecting the X-RateLimit-* headers on any response. No extra endpoint needed.
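For example, the plan-usage percentage shown in the dashboard can be derived from those headers alone (`quotaUsedPercent` is a hypothetical helper, not part of the API):

```js
// Percentage of today's quota consumed, computed from the values of
// X-RateLimit-Limit and X-RateLimit-Remaining
function quotaUsedPercent(limit, remaining) {
  return Math.round(((limit - remaining) / limit) * 100);
}
```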
Need higher limits?
The free tier (200 req/day) is great for experimentation. When you’re ready to build something production-worthy:
- Trader ($14.99/month) — 10,000 requests/day + price history access
- Business ($89.99/month) — 100,000 requests/day + priority support