Rate limiting is a crucial aspect of working with any API. Fluxer.Net includes built-in client-side rate limiting to help prevent your bot from being temporarily or permanently banned.
Rate limiting is a mechanism that restricts the number of API requests you can make within a specific time window. Fluxer enforces rate limits to protect its infrastructure from abuse and to keep the API responsive for all clients.
When you exceed a rate limit, the Fluxer API will return an HTTP 429 (Too Many Requests) response, and your bot may be temporarily blocked from making further requests.
Fluxer enforces several types of rate limits:

- Global limits apply to all API requests across your entire application.
- Per-endpoint limits: different API endpoints have different rate limits.
- Shared buckets: some endpoints share rate limit buckets based on resource IDs (for example, all message sends to the same channel count against that channel's bucket).
Fluxer.Net implements client-side rate limiting using a sliding window algorithm to prevent hitting server-side rate limits.
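To illustrate the idea, here is a simplified sketch of a sliding-window limiter (this is illustrative only, not Fluxer.Net's actual RateLimitManager implementation): it records a timestamp per request and only admits a new one when fewer than the limit fall inside the window.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Illustrative sliding-window limiter: at most `limit` requests per `window`.
public class SlidingWindowLimiter
{
    private readonly int _limit;
    private readonly TimeSpan _window;
    private readonly Queue<DateTime> _timestamps = new();
    private readonly object _lock = new();

    public SlidingWindowLimiter(int limit, TimeSpan window)
    {
        _limit = limit;
        _window = window;
    }

    // Waits until a request is allowed, then records it.
    public async Task WaitAsync()
    {
        while (true)
        {
            TimeSpan delay;
            lock (_lock)
            {
                var cutoff = DateTime.UtcNow - _window;

                // Drop timestamps that have slid out of the window
                while (_timestamps.Count > 0 && _timestamps.Peek() < cutoff)
                    _timestamps.Dequeue();

                if (_timestamps.Count < _limit)
                {
                    _timestamps.Enqueue(DateTime.UtcNow);
                    return; // Under the limit: proceed immediately
                }

                // Otherwise wait until the oldest timestamp leaves the window
                delay = _timestamps.Peek() + _window - DateTime.UtcNow;
            }
            await Task.Delay(delay > TimeSpan.Zero ? delay : TimeSpan.FromMilliseconds(10));
        }
    }
}
```

For example, `new SlidingWindowLimiter(5, TimeSpan.FromSeconds(5))` would model the per-channel message limit described below.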
Rate limiting is enabled by default:
var config = new FluxerConfig
{
EnableRateLimiting = true // Default value
};
var apiClient = new ApiClient(token, config);
// Example: Sending multiple messages
for (int i = 0; i < 10; i++)
{
// Fluxer.Net automatically waits if needed
await apiClient.SendMessage(channelId, new Message
{
Content = $"Message {i}"
});
// No manual delay needed!
}
The RateLimitManager is configured with predefined limits for different endpoint types:
// Sending messages: 5 per 5 seconds per channel
// Automatically handled by Fluxer.Net
await apiClient.SendMessage(channelId, message);
// Adding reactions: 1 per 0.25 seconds (4/sec) per channel
await apiClient.AddReaction(channelId, messageId, "👍");
// Most other endpoints: 50 per second globally
await apiClient.GetCurrentUser();
Never disable rate limiting in production:
// Good: Keep rate limiting enabled
var config = new FluxerConfig
{
EnableRateLimiting = true // Default
};
// Bad: Disabling rate limiting (only for testing)
var config = new FluxerConfig
{
EnableRateLimiting = false // Don't do this in production!
};
Use bulk operations when available instead of making many individual requests:
// Bad: Individual delete requests
foreach (var messageId in messageIds)
{
await apiClient.DeleteMessage(channelId, messageId);
// Will hit rate limits quickly
}
// Good: Bulk delete
var bulkData = new { messages = messageIds.ToArray() };
await apiClient.BulkDeleteMessages(channelId, bulkData);
Cache frequently accessed data to avoid repeated API calls:
// Bad: Fetching guild data repeatedly
for (int i = 0; i < 100; i++)
{
var guild = await apiClient.GetGuild(guildId);
// Process guild data
}
// Good: Fetch once and reuse the result
var guild = await apiClient.GetGuild(guildId);
for (int i = 0; i < 100; i++)
{
// Use the cached guild instead of refetching
}
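If the data can go stale, a small time-based cache lets you reuse results while still refreshing periodically. A minimal sketch, assuming the ApiClient and Guild types from the examples above:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Illustrative time-based cache for guild lookups.
public class GuildCache
{
    private readonly ApiClient _apiClient;
    private readonly TimeSpan _ttl = TimeSpan.FromMinutes(5);
    private readonly ConcurrentDictionary<ulong, (Guild Guild, DateTime FetchedAt)> _cache = new();

    public GuildCache(ApiClient apiClient) => _apiClient = apiClient;

    public async Task<Guild> GetGuildAsync(ulong guildId)
    {
        if (_cache.TryGetValue(guildId, out var entry) &&
            DateTime.UtcNow - entry.FetchedAt < _ttl)
        {
            return entry.Guild; // Still fresh: no API call
        }

        var guild = await _apiClient.GetGuild(guildId); // At most one API call per TTL
        _cache[guildId] = (guild, DateTime.UtcNow);
        return guild;
    }
}
```

The five-minute TTL is an arbitrary choice; tune it to how quickly the data changes in your application.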
For operations that might generate many requests, use a queue:
using System.Collections.Concurrent;
public class MessageQueue
{
private readonly ConcurrentQueue<(ulong ChannelId, Message Message)> _queue = new();
private readonly ApiClient _apiClient;
public MessageQueue(ApiClient apiClient)
{
_apiClient = apiClient;
_ = ProcessQueueAsync(); // Start processing in background
}
public void Enqueue(ulong channelId, Message message)
{
_queue.Enqueue((channelId, message));
}
private async Task ProcessQueueAsync()
{
while (true)
{
if (_queue.TryDequeue(out var item))
{
try
{
await _apiClient.SendMessage(item.ChannelId, item.Message);
}
catch (Exception ex)
{
Console.WriteLine($"Error sending message: {ex.Message}");
}
}
else
{
await Task.Delay(100); // Wait before checking again
}
}
}
}
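Using the queue is then a matter of enqueueing instead of awaiting each send; callers return immediately and the background loop drains the queue at its own pace:

```csharp
var queue = new MessageQueue(apiClient);

// Callers never block on the API; messages are sent in order by the background loop.
queue.Enqueue(channelId, new Message { Content = "First" });
queue.Enqueue(channelId, new Message { Content = "Second" });
```

Note that ProcessQueueAsync above runs for the lifetime of the process. In a real bot you would typically pass a CancellationToken into the loop so the queue can shut down cleanly.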
Even with client-side rate limiting, you should handle potential 429 errors:
using Fluxer.Net.Extensions;
async Task SendMessageSafely(ulong channelId, Message message)
{
const int maxRetries = 3;
int retryCount = 0;
while (retryCount < maxRetries)
{
try
{
await apiClient.SendMessage(channelId, message);
return; // Success
}
catch (FluxerApiException ex) when (ex.Message.Contains("429"))
{
retryCount++;
Console.WriteLine($"Rate limited. Retry {retryCount}/{maxRetries}");
// Wait before retrying (exponential backoff)
await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, retryCount)));
}
}
Console.WriteLine("Failed to send message after retries");
}
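One refinement worth considering: adding random jitter to the backoff delay spreads retries out when many requests get rate limited at the same moment. This is a general retry pattern, not something specific to Fluxer.Net:

```csharp
var random = new Random();

// Exponential backoff with jitter: 2^n seconds plus up to 1 second of noise
TimeSpan BackoffWithJitter(int retryCount) =>
    TimeSpan.FromSeconds(Math.Pow(2, retryCount))
    + TimeSpan.FromMilliseconds(random.Next(0, 1000));

// The Task.Delay in the retry loop above could then use:
// await Task.Delay(BackoffWithJitter(retryCount));
```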
Be careful with loops that make API calls:
// Bad: Spammy loop
while (true)
{
var messages = await apiClient.GetMessages(channelId);
// Process messages
// No delay - will hit rate limits!
}
// Good: Loop with delay
while (true)
{
var messages = await apiClient.GetMessages(channelId);
// Process messages
await Task.Delay(TimeSpan.FromSeconds(1)); // Wait between iterations
}
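On .NET 6 and later, System.Threading.PeriodicTimer is an idiomatic way to express this kind of polling loop (a sketch, assuming the same apiClient and channelId as above):

```csharp
using System;
using System.Threading;

var timer = new PeriodicTimer(TimeSpan.FromSeconds(1));

// Ticks at most once per second, regardless of how fast processing finishes
while (await timer.WaitForNextTickAsync())
{
    var messages = await apiClient.GetMessages(channelId);
    // Process messages
}
```

Unlike a bare Task.Delay, the timer accounts for the time spent processing, so iterations stay roughly one second apart.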
You can monitor rate limit behavior through logging:
var logger = new LoggerConfiguration()
.MinimumLevel.Debug() // Set to Debug to see rate limit logs
.WriteTo.Console()
.CreateLogger();
var config = new FluxerConfig
{
Serilog = logger,
EnableRateLimiting = true
};
var apiClient = new ApiClient(token, config);
// Fluxer.Net will log when it waits for rate limits
Fluxer API responses include rate limit information in headers (not directly exposed in Fluxer.Net but handled internally):
- X-RateLimit-Limit - Maximum requests allowed
- X-RateLimit-Remaining - Requests remaining in the current window
- X-RateLimit-Reset - Unix timestamp when the limit resets
- X-RateLimit-Bucket - Rate limit bucket identifier

// Multiple users joining at once
gatewayClient.GuildMemberAdd += async (data) =>
{
// Safe: Rate limiting handles this automatically
await apiClient.SendMessage(welcomeChannelId, new Message
{
Content = $"Welcome, {data.Member.User.Username}!"
});
};
// Deleting old messages
var messages = await apiClient.GetMessages(channelId);
var oldMessages = messages.Where(m => m.CreatedAt < DateTime.UtcNow.AddDays(-7));
// Good: Use bulk delete (max 100 messages at once)
var chunks = oldMessages.Chunk(100);
foreach (var chunk in chunks)
{
await apiClient.BulkDeleteMessages(channelId,
new { messages = chunk.Select(m => m.Id).ToArray() });
}
// Sending the same message to multiple channels
var channels = new[] { channelId1, channelId2, channelId3 };
foreach (var channelId in channels)
{
// Safe: Each channel has its own rate limit bucket
await apiClient.SendMessage(channelId, new Message
{
Content = "Important announcement!"
});
}
If you're experiencing rate limit issues: