Landing Page
Features overview and pricing
GitHub
Source code and releases
Wally
Install via Wally registry
.rbxm Download
Download the latest model file
Installation
Wally (recommended)
Add to your wally.toml:
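A minimal dependency entry might look like the following; the package scope, name, and version shown here are placeholders, so check the Wally registry for the real values:

```toml
# Hypothetical entry -- verify the exact package name and latest
# version on the Wally registry before using.
[server-dependencies]
TuteliqSafety = "tuteliq/tuteliq-safety@1.0.0"
```

After adding the entry, run `wally install` to pull the package into your project.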
Rojo model file
Download TuteliqSafety.rbxm from the latest GitHub release and insert it into ServerScriptService in Roblox Studio.
Roblox Creator Store
Search for TuteliqSafety in the Creator Marketplace toolbox inside Roblox Studio, then insert the model into ServerScriptService.
Manual
Copy the TuteliqSafety/ folder into ServerScriptService in Roblox Studio.
Prerequisites
- Enable HTTP Requests — In Roblox Studio go to Game Settings → Security → Allow HTTP Requests and turn it on.
- Get an API key — Sign up at tuteliq.ai and create a key starting with tq_.
Quick start
Create a Script in ServerScriptService:
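A minimal startup script might look like this; the exact config field names (`apiKey`, `sensitivity`) are assumptions based on this guide, not a verified schema:

```lua
-- Minimal sketch: require the module and start monitoring.
-- Config field names here are assumptions; consult the module's docs.
local ServerScriptService = game:GetService("ServerScriptService")
local TuteliqSafety = require(ServerScriptService.TuteliqSafety)

TuteliqSafety.start({
	apiKey = "tq_your_key_here", -- key created at tuteliq.ai
	sensitivity = "medium",      -- balanced default (see Sensitivity levels)
})
```

Once `start` is called, the module monitors chat automatically; no per-message wiring is needed.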
How it works
Every chat message goes through a three-stage pipeline:
- Crisis keywords — Phrases like “kill myself” or “send me pictures” trigger an immediate API call to /safety/unsafe for instant analysis.
- Grooming buffer — Messages are accumulated per channel. When the buffer reaches 5+ messages from 2+ unique senders, a conversation analysis is sent to /safety/grooming.
- Batch queue — All other messages are batched (25 per request or every 5 seconds) and sent to /batch/analyze for efficient bullying and unsafe content detection.
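The routing above can be sketched as follows. This is illustrative only; every name in it is hypothetical and not the module's real internals:

```lua
-- Illustrative sketch of the three-stage routing; all identifiers
-- here are hypothetical, not the module's actual implementation.
local CRISIS_PHRASES = { "kill myself", "send me pictures" }

local buffers = {} -- per-channel conversation buffers
local batch = {}   -- queue flushed at 25 messages or every 5 seconds

local function route(channel, senderId, text)
	-- Stage 1: crisis keywords -> immediate /safety/unsafe call
	local lowered = string.lower(text)
	for _, phrase in ipairs(CRISIS_PHRASES) do
		if string.find(lowered, phrase, 1, true) then
			return "/safety/unsafe"
		end
	end

	-- Stage 2: accumulate per channel; 5+ messages from 2+ unique
	-- senders triggers a /safety/grooming conversation analysis
	local buf = buffers[channel] or { messages = {}, senders = {} }
	buffers[channel] = buf
	table.insert(buf.messages, { sender = senderId, text = text })
	buf.senders[senderId] = true
	local senderCount = 0
	for _ in pairs(buf.senders) do
		senderCount += 1
	end
	if #buf.messages >= 5 and senderCount >= 2 then
		return "/safety/grooming"
	end

	-- Stage 3: everything else joins the /batch/analyze queue
	table.insert(batch, text)
	return "/batch/analyze"
end
```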
| Severity | Actions |
|---|---|
| Low | Log to output |
| Medium | Log + notify admins via BindableEvent + webhook |
| High | Log + notify + mute player for 60s + webhook |
| Critical | Log + notify + mute + kick player (if enabled) + webhook |
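The table above maps directly to an escalating action set per severity, which could be represented roughly like this (a sketch of the policy, not the module's code):

```lua
-- Rough encoding of the severity table above; illustrative only.
local ACTIONS = {
	low      = { log = true },
	medium   = { log = true, notify = true, webhook = true },
	high     = { log = true, notify = true, webhook = true, muteSeconds = 60 },
	critical = { log = true, notify = true, webhook = true, muteSeconds = 60,
	             kick = true }, -- kick only if enabled in config
}
```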
Listen for detections
Connect to the TuteliqAlert BindableEvent to build custom admin UI or notifications:
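A minimal listener might look like this; per the API table, `TuteliqSafety.onDetection()` returns the BindableEvent, but the fields on the detection payload shown here are assumptions:

```lua
local ServerScriptService = game:GetService("ServerScriptService")
local TuteliqSafety = require(ServerScriptService.TuteliqSafety)

-- onDetection() returns the TuteliqAlert BindableEvent.
-- The payload fields (severity, text) are assumptions for illustration.
TuteliqSafety.onDetection().Event:Connect(function(detection)
	print(string.format("[TuteliqAlert] severity=%s message=%s",
		tostring(detection.severity), tostring(detection.text)))
end)
```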
Integrate muting with your chat system
Muted players have the attribute TuteliqMuted set to true. Use this to filter messages in your chat UI:
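One way to apply the filter, assuming your chat UI can look up the sender's Player object (the helper name is illustrative; the TuteliqMuted attribute is the one named above):

```lua
local Players = game:GetService("Players")

-- Hypothetical helper: returns false for messages from muted players,
-- using the TuteliqMuted attribute set by the module.
local function shouldDisplayMessage(senderUserId: number): boolean
	local player = Players:GetPlayerByUserId(senderUserId)
	return not (player and player:GetAttribute("TuteliqMuted") == true)
end
```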
Manual analysis
Analyze a specific message on demand, bypassing the batch queue. Note that analyzeNow does not support grooming detection directly: grooming requires conversation context, which is handled automatically by the conversation buffer.
Configuration
Sensitivity levels
| Level | Threshold | Best for |
|---|---|---|
| low | 0.7 | Minimal false positives, only flags high-confidence threats |
| medium | 0.5 | Balanced — good default for most experiences |
| high | 0.3 | Catches more subtle threats, may increase false positives |
| maximum | 0.1 | Maximum protection — flags nearly everything suspicious |
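Selecting a level from the table would then look something like this; the `sensitivity` config field name is an assumption based on this guide:

```lua
local ServerScriptService = game:GetService("ServerScriptService")
local TuteliqSafety = require(ServerScriptService.TuteliqSafety)

-- Field name `sensitivity` is assumed; values come from the table above.
TuteliqSafety.start({
	apiKey = "tq_your_key_here",
	sensitivity = "high", -- threshold 0.3: catches subtler threats
})
```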
API
| Method | Description |
|---|---|
| TuteliqSafety.start(config) | Start monitoring chat |
| TuteliqSafety.stop() | Stop monitoring and clean up connections |
| TuteliqSafety.onDetection() | Returns the TuteliqAlert BindableEvent |
| TuteliqSafety.analyzeNow(text, type?, player?) | Manually analyze a message |
| TuteliqSafety.isRunning() | Check if the module is active |
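Tying the table together, a short usage sketch; the signature of analyzeNow comes from the table, but the `"unsafe"` type string and the shape of its return value are assumptions:

```lua
local ServerScriptService = game:GetService("ServerScriptService")
local TuteliqSafety = require(ServerScriptService.TuteliqSafety)

-- Assumed config fields; see Installation and Configuration above.
TuteliqSafety.start({ apiKey = "tq_your_key_here" })

if TuteliqSafety.isRunning() then
	-- Manual, on-demand analysis that bypasses the batch queue.
	-- The "unsafe" analysis type and result shape are assumptions.
	local result = TuteliqSafety.analyzeNow("some message", "unsafe")
	print(result)
end

TuteliqSafety.stop() -- disconnects everything when you are done
```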
Architecture
Rate limiting
The module uses a token bucket rate limiter defaulting to 400 requests per minute. This is conservative and leaves room for your game’s other HTTP calls, since Roblox enforces a global limit of 500 HTTP requests per minute per server. Because batched requests carry up to 25 messages each, this budget allows roughly 400 × 25 ≈ 10,000 messages analyzed per minute.
Next steps
API Reference
Explore the full API specification.
Unity SDK
See the Unity SDK guide for C# game integration.