Gaming Chat Moderation
Gaming moderation is fundamentally different from moderating a blog or forum. Players use aggressive banter that’s perfectly normal in context, spam RMT offers using creative obfuscation, and communicate in rapid-fire short messages. Sieve’s Gaming pipeline mode is built for this.
Why Gaming Is Different
- “I’m gonna kill you” is normal PvP trash talk. “I’m gonna find where you live” is a real threat. Context matters more than keywords.
- Gold sellers, boosting services, and account traders use heavy obfuscation: “g0ld 4 s4le” and Unicode substitution to evade filters.
- Players deliberately obfuscate slurs and banned terms with character substitution, spacing tricks, and Unicode lookalikes.
- Chat messages need sub-second moderation. Players notice delays, and slow moderation breaks the flow of gameplay.
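To see why naive keyword filters fail against this kind of obfuscation, here is a toy normalizer that folds leetspeak and Unicode lookalikes back to plain ASCII before matching. This is only an illustration of the idea, not Sieve's actual pipeline; the `LEET` table and `normalize` helper are made up for this sketch.

```python
# Toy normalizer: fold leetspeak digits and Unicode lookalikes back to
# plain ASCII so "g0ld 4 s4le" can be matched against "gold".
# Illustrative only -- not Sieve's actual normalization.
import unicodedata

LEET = str.maketrans({"0": "o", "1": "l", "3": "e", "4": "a",
                      "5": "s", "7": "t", "@": "a", "$": "s"})

def normalize(text: str) -> str:
    text = unicodedata.normalize("NFKD", text)      # decompose lookalikes (e.g. fullwidth "ｇ")
    text = text.encode("ascii", "ignore").decode()  # drop combining marks and non-ASCII leftovers
    return text.lower().translate(LEET)             # map common leetspeak substitutions

print(normalize("g0ld 4 s4le"))        # → "gold a sale"
print("gold" in normalize("ｇ0ld cheap"))  # → True
```

A real pipeline layers many more tricks on top of this (spacing removal, repeated-character collapsing, homoglyph tables), which is why it is worth delegating rather than hand-rolling.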
Recommended Setup
For most games, use Gaming mode with the `chat` context:
```bash
curl -X POST https://api.getsieve.dev/v1/moderate/text \
  -H "Authorization: Bearer mod_live_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "player message here",
    "context": "chat",
    "username": "PlayerOne"
  }'
```

Code Examples
JavaScript:

```javascript
const response = await fetch('https://api.getsieve.dev/v1/moderate/text', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer mod_live_your_key',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    text: message,
    context: 'chat',
    username: player.username,
  }),
});

const result = await response.json();

if (result.action === 'block') {
  // Don't show the message, warn the player
  player.warn('Your message was blocked for: ' + result.primary_category);
} else if (result.action === 'flag') {
  // Show the message but log it for review
  chat.send(message);
  moderationQueue.add(result);
} else {
  chat.send(message);
}
```

Python:

```python
import requests

response = requests.post(
    "https://api.getsieve.dev/v1/moderate/text",
    headers={
        "Authorization": "Bearer mod_live_your_key",
        "Content-Type": "application/json",
    },
    json={
        "text": message,
        "context": "chat",
        "username": player.username,
    },
)

result = response.json()

if result["action"] == "block":
    player.warn(f"Message blocked: {result['primary_category']}")
elif result["action"] == "flag":
    chat.send(message)
    moderation_queue.add(result)
else:
    chat.send(message)
```

C#:

```csharp
using System.Net.Http;
using System.Net.Http.Json;

using var client = new HttpClient();
client.DefaultRequestHeaders.Add("Authorization", "Bearer mod_live_your_key");

var payload = new
{
    text = message,
    context = "chat",
    username = player.Username
};

var response = await client.PostAsJsonAsync(
    "https://api.getsieve.dev/v1/moderate/text", payload);
// ModerationResult maps the JSON response fields (action, primary_category, ...)
var result = await response.Content.ReadFromJsonAsync<ModerationResult>();

switch (result.Action)
{
    case "block":
        player.Warn($"Message blocked: {result.PrimaryCategory}");
        break;
    case "flag":
        Chat.Send(message);
        ModerationQueue.Add(result);
        break;
    default:
        Chat.Send(message);
        break;
}
```

Godot (GDScript):

```gdscript
var http = HTTPRequest.new()
add_child(http)

var headers = [
    "Authorization: Bearer mod_live_your_key",
    "Content-Type: application/json"
]

var body = JSON.stringify({
    "text": message,
    "context": "chat",
    "username": player.username
})

http.request("https://api.getsieve.dev/v1/moderate/text", headers, HTTPClient.METHOD_POST, body)

# In the response callback
func _on_moderation_complete(result, response_code, headers, body):
    var json = JSON.parse_string(body.get_string_from_utf8())
    match json["action"]:
        "block":
            player.show_warning("Message blocked: " + json["primary_category"])
        "flag":
            chat.send_message(message)
        "allow":
            chat.send_message(message)
```

Handling the Response
Every moderation response includes an `action` field with one of three values:
| Action | What It Means | Recommended Handling |
|---|---|---|
| `allow` | Content is clean | Display the message normally |
| `flag` | Content is borderline | Display but log for human review |
| `block` | Content violates rules | Suppress the message, warn the user |
The response also includes per-category scores so you can implement custom logic:
```json
{
  "action": "flag",
  "primary_category": "toxicity",
  "scores": {
    "toxicity": 0.72,
    "harassment": 0.45,
    "hate_speech": 0.12,
    "sexual": 0.01,
    "violence": 0.08,
    "self_harm": 0.00,
    "spam": 0.15
  },
  "tier": 1,
  "latency_ms": 145
}
```
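One way to use the per-category scores is to layer stricter, game-specific thresholds on top of the default `action`. The sketch below is illustrative: the `decide` helper and the threshold values are not part of the API and should be tuned for your own community.

```python
# Sketch: override the default action with stricter per-category
# thresholds. Threshold values are illustrative, not API defaults.
STRICT_THRESHOLDS = {
    "hate_speech": 0.30,  # zero tolerance: block well below the default cutoff
    "self_harm": 0.20,    # escalate early
}

def decide(result: dict) -> str:
    """Return 'allow', 'flag', or 'block' for a moderation response."""
    for category, threshold in STRICT_THRESHOLDS.items():
        if result["scores"].get(category, 0.0) >= threshold:
            return "block"
    return result["action"]  # otherwise defer to Sieve's default action

example = {"action": "flag", "scores": {"toxicity": 0.72, "hate_speech": 0.12}}
print(decide(example))  # → "flag" (no strict threshold tripped)
```

This keeps the default `action` as the baseline while letting you be harsher in the categories your community cares most about.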
Custom Rules for Your Game
Every game has specific slang, ability names, and community terms that need special handling. Use custom rules to fine-tune moderation for your game.
Common custom rules for games:
```bash
# Allowlist game-specific terms that trigger false positives
curl -X POST https://api.getsieve.dev/v1/config/rules \
  -H "Authorization: Bearer mod_live_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "allowlist",
    "words": ["assassinate", "execute", "headshot", "killstreak", "nuke"],
    "priority": 100
  }'
```
```bash
# Block game-specific RMT spam
curl -X POST https://api.getsieve.dev/v1/config/rules \
  -H "Authorization: Bearer mod_live_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "wordlist",
    "action": "block",
    "category": "spam",
    "words": ["buy gold", "cheap currency", "account boost", "power level service"],
    "priority": 10
  }'
```

Latency Considerations
For real-time game chat, latency matters. Here’s what to expect:
| Scenario | Typical Latency |
|---|---|
| Tier 0 catches it (60-70% of requests) | < 5ms |
| Escalates to Gaming AI tier | 200-400ms |
| Blended average | ~80ms |
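The blended figure follows from the tier split. A quick back-of-envelope check, assuming a 70% Tier 0 hit rate at ~5ms and 30% escalations at the ~250ms midpoint of the AI-tier range (the exact split for your traffic will vary):

```python
# Back-of-envelope check of the blended average latency,
# assuming 70% of requests stay on Tier 0 (~5 ms) and 30%
# escalate to the Gaming AI tier (~250 ms midpoint of 200-400 ms).
tier0_share, tier0_ms = 0.70, 5
escalated_share, escalated_ms = 0.30, 250

blended = tier0_share * tier0_ms + escalated_share * escalated_ms
print(f"{blended:.1f} ms")  # → 78.5 ms, in line with the ~80ms figure
```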
Architecture tips for low-latency integration:
- Fire and forget: Send the moderation request async. Display the message immediately and retroactively hide it if blocked. This is the pattern most large games use.
- Pre-send check: Moderate before displaying. Adds latency but prevents any blocked content from ever appearing. Better for younger audiences.
- Hybrid: Use fire-and-forget for the `chat` context and pre-send for the `username` context.
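The fire-and-forget pattern above can be sketched as follows. The `display`/`hide_message` helpers and the stubbed `moderate` call are placeholders for your chat stack and the real HTTP request to `/v1/moderate/text`; the wiring will differ per engine.

```python
# Sketch of fire-and-forget moderation: show the message immediately,
# moderate in the background, and retroactively hide it on "block".
import asyncio

visible: dict[str, str] = {}  # message_id -> text currently shown
_tasks = []                   # keep task references so they aren't GC'd

def display(message_id: str, text: str):
    visible[message_id] = text

def hide_message(message_id: str):
    visible.pop(message_id, None)

async def moderate(text: str) -> dict:
    # Placeholder for the real HTTP call; "buy gold" stands in for
    # content Sieve would block.
    await asyncio.sleep(0.01)
    return {"action": "block" if "buy gold" in text else "allow"}

async def on_chat_message(message_id: str, text: str):
    display(message_id, text)  # show immediately, no added latency
    _tasks.append(asyncio.create_task(_moderate_later(message_id, text)))

async def _moderate_later(message_id: str, text: str):
    result = await moderate(text)
    if result["action"] == "block":
        hide_message(message_id)  # retroactively remove from all clients

async def main():
    await on_chat_message("m1", "gg wp")
    await on_chat_message("m2", "buy gold cheap here")
    await asyncio.sleep(0.05)  # let background moderation finish
    print(sorted(visible))     # → ['m1']

asyncio.run(main())
```

The trade-off is a brief window where blocked content is visible; that is why the pre-send check remains the better fit for younger audiences.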