Quick Reference for AI Agents & Developers

```javascript
// Handle rate limit errors with retry
async function sendWithRetry(message, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await CometChat.sendMessage(message);
    } catch (error) {
      if (error.code === "ERR_TOO_MANY_REQUESTS") {
        const delay = error.details?.retryAfter || Math.pow(2, i) * 1000;
        await new Promise(r => setTimeout(r, delay));
      } else throw error;
    }
  }
}

// Rate limits: Core ops 10,000/min, Standard ops 20,000/min
// Response headers: X-Rate-Limit, X-Rate-Limit-Remaining, Retry-After
```
CometChat enforces rate limits to ensure platform stability and fair usage across all applications.
Rate limits shown below are defaults. Limits can be adjusted based on your use case and plan. Contact CometChat Support for custom limits.
Rate Limit Overview
| Operation Type | Limit | Examples |
| --- | --- | --- |
| Core Operations | 10,000/min | Login, create user, create group, join group |
| Standard Operations | 20,000/min | Send message, fetch users, update profile |
Rate limits are cumulative within each category. For example, if you login 5,000 users and create 5,000 groups in one minute, you’ve used your entire core operations quota.
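The cumulative accounting can be sketched with a hypothetical client-side counter (`QuotaTracker` is illustrative, not part of the CometChat SDK; the server is always the source of truth):

```javascript
// Illustrative client-side tally: track core-operation usage within a
// one-minute window to see when the 10,000/min quota is exhausted.
class QuotaTracker {
  constructor(limitPerMinute) {
    this.limit = limitPerMinute;
    this.used = 0;
    this.windowStart = Date.now();
  }
  record(count = 1) {
    if (Date.now() - this.windowStart >= 60000) {
      this.used = 0; // new minute, fresh quota
      this.windowStart = Date.now();
    }
    this.used += count;
    return this.limit - this.used; // remaining budget this minute
  }
}

const core = new QuotaTracker(10000);
core.record(5000);                   // 5,000 logins
const remaining = core.record(5000); // 5,000 group creations
// remaining === 0: the entire core-operations quota is used
```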
Every API response includes rate limit information:
| Header | Description | Example |
| --- | --- | --- |
| X-Rate-Limit | Your total limit per minute | 10000 |
| X-Rate-Limit-Remaining | Requests remaining | 9500 |
| Retry-After | Seconds until reset (when limited) | 15 |
| X-Rate-Limit-Reset | Unix timestamp of reset | 1625143246 |
Handling Rate Limits
When you exceed the rate limit, the API returns a 429 Too Many Requests response.
JavaScript:

```javascript
async function sendMessageWithRetry(message, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await CometChat.sendMessage(message);
    } catch (error) {
      if (error.code === "ERR_TOO_MANY_REQUESTS") {
        // Get retry delay from error or use exponential backoff
        const retryAfter = error.details?.retryAfter || Math.pow(2, attempt) * 1000;
        console.log(`Rate limited. Retrying in ${retryAfter}ms...`);
        await new Promise(resolve => setTimeout(resolve, retryAfter));
      } else {
        throw error;
      }
    }
  }
  throw new Error("Max retries exceeded");
}
```
TypeScript:

```typescript
async function sendMessageWithRetry(
  message: CometChat.TextMessage,
  maxRetries: number = 3
): Promise<CometChat.TextMessage> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await CometChat.sendMessage(message);
    } catch (error: any) {
      if (error.code === "ERR_TOO_MANY_REQUESTS") {
        const retryAfter = error.details?.retryAfter || Math.pow(2, attempt) * 1000;
        console.log(`Rate limited. Retrying in ${retryAfter}ms...`);
        await new Promise(resolve => setTimeout(resolve, retryAfter));
      } else {
        throw error;
      }
    }
  }
  throw new Error("Max retries exceeded");
}
```
Best Practices
Implement Request Queuing
Queue requests and process them at a controlled rate to avoid hitting limits.

```javascript
class RequestQueue {
  constructor(requestsPerSecond = 100) {
    this.queue = [];
    this.interval = 1000 / requestsPerSecond;
    this.processing = false;
  }

  add(request) {
    return new Promise((resolve, reject) => {
      this.queue.push({ request, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing) return;
    this.processing = true;
    while (this.queue.length > 0) {
      const { request, resolve, reject } = this.queue.shift();
      try {
        resolve(await request());
      } catch (error) {
        reject(error);
      }
      await new Promise(r => setTimeout(r, this.interval));
    }
    this.processing = false;
  }
}
```
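A usage sketch for this queue pattern: each job is a function returning a promise, so the queue decides when it actually fires. The class body is repeated here so the snippet runs standalone, and the `Promise.resolve` jobs stand in for real CometChat calls.

```javascript
class RequestQueue {
  constructor(requestsPerSecond = 100) {
    this.queue = [];
    this.interval = 1000 / requestsPerSecond;
    this.processing = false;
  }
  add(request) {
    return new Promise((resolve, reject) => {
      this.queue.push({ request, resolve, reject });
      this.process();
    });
  }
  async process() {
    if (this.processing) return;
    this.processing = true;
    while (this.queue.length > 0) {
      const { request, resolve, reject } = this.queue.shift();
      try {
        resolve(await request());
      } catch (error) {
        reject(error);
      }
      await new Promise(r => setTimeout(r, this.interval));
    }
    this.processing = false;
  }
}

async function main() {
  const queue = new RequestQueue(50); // at most ~50 requests/second
  // Queue three jobs; they run in order, spaced ~20 ms apart.
  const results = await Promise.all([
    queue.add(() => Promise.resolve("a")),
    queue.add(() => Promise.resolve("b")),
    queue.add(() => Promise.resolve("c")),
  ]);
  console.log(results); // ["a", "b", "c"]
}
main();
```

In a real app you would enqueue e.g. `queue.add(() => CometChat.sendMessage(message))` instead of the stub jobs.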
Batch Operations When Possible
Use batch APIs instead of individual requests:

```javascript
// Instead of multiple individual fetches, use pagination with larger limits
const usersRequest = new CometChat.UsersRequestBuilder()
  .setLimit(100) // Fetch more per request
  .build();
```
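Paging through all results with the larger limit can be sketched with a generic helper; `drainPages` is hypothetical, and with the CometChat SDK the page function would be something like `() => usersRequest.fetchNext()` (verify the empty-page termination behavior against your SDK version):

```javascript
// Generic pagination helper: drains any "fetch next page" function that
// resolves to an empty array when no results remain.
async function drainPages(fetchPage) {
  const all = [];
  let page;
  do {
    page = await fetchPage();
    all.push(...page);
  } while (page.length > 0);
  return all;
}

// Demo with a stubbed pager yielding two pages, then an empty page.
const pages = [[1, 2], [3], []];
drainPages(() => Promise.resolve(pages.shift())).then(all => {
  console.log(all); // [1, 2, 3]
});
```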
Cache Frequently Accessed Data
Cache user profiles, group info, and other data that doesn't change often:

```javascript
const userCache = new Map();

async function getUser(uid) {
  if (userCache.has(uid)) {
    return userCache.get(uid);
  }
  const user = await CometChat.getUser(uid);
  userCache.set(uid, user);
  return user;
}
```
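The bare `Map` above never invalidates, so stale profiles live forever. A sketch of a variant with a time-to-live (`getCached` and the 5-minute TTL are illustrative, not SDK APIs):

```javascript
// Cache entries carry an expiry; expired entries are refetched.
const TTL_MS = 5 * 60 * 1000; // illustrative: refresh after 5 minutes
const cache = new Map();      // key -> { value, expires }

async function getCached(key, fetcher) {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) {
    return hit.value; // fresh hit: no API call spent
  }
  const value = await fetcher(key); // e.g. uid => CometChat.getUser(uid)
  cache.set(key, { value, expires: Date.now() + TTL_MS });
  return value;
}
```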
Monitor Rate Limit Headers

Track rate limit headers to proactively manage usage:

```javascript
function checkRateLimitStatus(response) {
  const remaining = response.headers?.["X-Rate-Limit-Remaining"];
  const limit = response.headers?.["X-Rate-Limit"];
  if (remaining && limit) {
    const usagePercent = ((limit - remaining) / limit) * 100;
    if (usagePercent > 80) {
      console.warn(`Rate limit usage at ${usagePercent.toFixed(1)}%`);
    }
  }
}
```
Common Scenarios
| Scenario | Recommendation |
| --- | --- |
| Bulk user import | Use REST API with batching, implement delays |
| High-traffic chat rooms | Use message pagination, cache messages |
| Presence-heavy apps | Limit presence subscriptions to visible users |
| Notification systems | Queue notifications, use batch sends |
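The bulk user import recommendation can be sketched as fixed-size batches with a pause between them; `importUsers` and `createUser` are hypothetical stand-ins for your REST calls, and the batch size and delay should be tuned to stay under your core-operations limit:

```javascript
// Import users in batches, pausing between batches to spread load.
async function importUsers(users, createUser, batchSize = 100, pauseMs = 1000) {
  for (let i = 0; i < users.length; i += batchSize) {
    const batch = users.slice(i, i + batchSize);
    await Promise.all(batch.map(createUser)); // one batch in flight at a time
    if (i + batchSize < users.length) {
      await new Promise(r => setTimeout(r, pauseMs)); // breathe between batches
    }
  }
}
```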
FAQ
Can I check my current rate limit?
There’s no dedicated endpoint, but every API response includes X-Rate-Limit and X-Rate-Limit-Remaining headers.
Do WebSocket messages count against rate limits?
No, rate limits apply to REST API calls only. Real-time messages via WebSocket have separate throttling.
Can I get higher rate limits?
Yes. The limits above are defaults; contact CometChat Support to request custom limits based on your use case and plan.
Next Steps