Install and use the Tuteliq MCP server for AI assistant integrations
The Tuteliq MCP server (@tuteliq/mcp v3.7.0) exposes child safety detection as tools for AI assistants that support the Model Context Protocol — including Claude Desktop, Cursor, Windsurf, and other MCP-compatible clients.
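To register the server with an MCP-compatible client such as Claude Desktop, add an entry to the client's MCP configuration file. The `npx` invocation and the `TUTELIQ_API_KEY` environment variable name below follow common MCP server conventions but are assumptions — check the package README for the exact command and variable names:

```json
{
  "mcpServers": {
    "tuteliq": {
      "command": "npx",
      "args": ["-y", "@tuteliq/mcp"],
      "env": {
        "TUTELIQ_API_KEY": "your-api-key-here"
      }
    }
  }
}
```

Restart the client after editing the configuration so it spawns the server and discovers the tools listed below.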
When true, returns supporting evidence excerpts with flagged phrases and weights
| Parameter | Type | Required | Description |
|---|---|---|---|
| `support_threshold` | string | No | Minimum severity to include crisis helplines. Values: `low`, `medium`, `high` (default), `critical`. Critical severity always includes support resources regardless of this setting. |
| `external_id` | string | No | Your external tracking ID (echoed in the response) |
| `customer_id` | string | No | Your customer identifier (echoed in the response) |
| Tool | Description |
|---|---|
| `detect_unsafe` | Detect harmful content across all nine KOSA categories |
| `detect_bullying` | Detect bullying and harassment patterns |
| `detect_grooming` | Detect grooming patterns in conversations |
| `detect_social_engineering` | Detect social engineering tactics (pretexting, impersonation, urgency manipulation) |
Crisis helpline threshold (low / medium / high / critical)
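As a concrete illustration, the arguments an MCP client might pass to `detect_grooming` can be assembled from the parameters documented on this page. The `message` field name and the overall schema are assumptions for illustration, not the documented tool schema:

```python
# Hypothetical sketch of the arguments an MCP client would pass to the
# detect_grooming tool. Field names mirror the parameters on this page
# (age_group, platform, conversation_history); "message" is assumed.
def build_detect_grooming_args(message, age_group=None, platform=None,
                               conversation_history=None):
    """Assemble a tool-call argument dict, omitting unset optional fields."""
    args = {"message": message}
    if age_group is not None:
        args["age_group"] = age_group
    if platform is not None:
        args["platform"] = platform
    if conversation_history is not None:
        args["conversation_history"] = conversation_history
    return args

args = build_detect_grooming_args(
    "Hey, you seem really mature for your age",
    age_group="13-15",
    platform="Discord",
)
```

Omitting unset optional fields keeps the payload minimal and lets the API apply its own defaults.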
Valid endpoint values for `analyse_multi`:

| Endpoint ID | Classifier |
|---|---|
| `bullying` | Bullying & Harassment |
| `grooming` | Grooming Detection |
| `unsafe` | Unsafe Content (KOSA categories) |
| `social-engineering` | Social Engineering |
| `app-fraud` | App-based Fraud |
| `romance-scam` | Romance Scam |
| `mule-recruitment` | Mule Recruitment |
| `gambling-harm` | Gambling Harm |
| `coercive-control` | Coercive Control |
| `vulnerability-exploitation` | Vulnerability Exploitation |
| `radicalisation` | Radicalisation |
When `vulnerability-exploitation` is included, its cross-endpoint modifier automatically adjusts severity scores across all other results — amplifying risk when the content targets vulnerable individuals.
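Conceptually, the cross-endpoint modifier works like the sketch below. The amplification factor and score values are invented for illustration; the real weighting is internal to the Tuteliq API:

```python
# Illustrative model of the cross-endpoint modifier: when the
# vulnerability-exploitation classifier reports a hit, severity scores
# from the other endpoints are amplified. Factor 1.25 is invented.
def apply_vulnerability_modifier(results, factor=1.25):
    """Amplify other endpoints' scores when vulnerability is detected."""
    vuln = results.get("vulnerability-exploitation", 0.0)
    if vuln <= 0:
        return dict(results)  # no vulnerability signal: scores unchanged
    adjusted = {}
    for endpoint, score in results.items():
        if endpoint == "vulnerability-exploitation":
            adjusted[endpoint] = score
        else:
            adjusted[endpoint] = min(1.0, score * factor)  # cap at 1.0
    return adjusted

scores = {"grooming": 0.6, "bullying": 0.2, "vulnerability-exploitation": 0.8}
adjusted = apply_vulnerability_modifier(scores)
```

The practical takeaway: including `vulnerability-exploitation` in an `analyse_multi` call changes the other endpoints' outputs, so compare like with like when tuning thresholds.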
ISO 639-1 code (e.g., "en", "de", "sv"). Auto-detected if omitted.
| Parameter | Type | Description |
|---|---|---|
| `platform` | string | Platform name (e.g., "Discord", "Roblox", "WhatsApp"). Adjusts for platform-specific norms. |
| `conversation_history` | array | Prior messages for context-aware analysis. Returns per-message `message_analysis`. |
| `sender_trust` | string | `"verified"`, `"trusted"`, or `"unknown"`. |
| `sender_name` | string | Sender identifier (used with `sender_trust`). |
| `country` | string | ISO 3166-1 alpha-2 code (e.g., "GB", "US", "SE"). Enables geo-localised crisis helpline data. Falls back to the user profile country if omitted. |
When sender_trust is "verified", the API fully suppresses AUTH_IMPERSONATION — a verified sender cannot be impersonating an authority by definition. Routine urgency (schedules, deadlines) is also suppressed. Only genuinely malicious content (credential theft, phishing links, financial demands) will be flagged.
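A client-side sketch of that suppression behaviour is shown below. Only `AUTH_IMPERSONATION` is a flag name taken from this page; the other flag names are invented for illustration, and the real filtering runs server-side:

```python
# Illustrative model of verified-sender suppression. AUTH_IMPERSONATION
# comes from the docs; ROUTINE_URGENCY and CREDENTIAL_THEFT are invented
# stand-ins for "routine urgency" and "genuinely malicious" findings.
SUPPRESSED_FOR_VERIFIED = {"AUTH_IMPERSONATION", "ROUTINE_URGENCY"}

def filter_flags(flags, sender_trust):
    """Drop flags the API suppresses when the sender is verified."""
    if sender_trust == "verified":
        return [f for f in flags if f not in SUPPRESSED_FOR_VERIFIED]
    return list(flags)

flags = ["AUTH_IMPERSONATION", "CREDENTIAL_THEFT"]
verified = filter_flags(flags, "verified")
unknown = filter_flags(flags, "unknown")
```

Note that malicious findings (here, the credential-theft stand-in) survive regardless of trust level, matching the behaviour described above.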
Tuteliq is fully stateless — no conversation text, context, or session state is retained between requests. This is a deliberate privacy-by-design decision. When processing messages from minors under GDPR/COPPA, the safest data is data you never store. Each request arrives, is analyzed, and the content is discarded. Pass context (age_group, platform, conversation_history) with every call that needs it.
You: "Check this message for grooming. The user is 13-15 on Discord: 'Hey, you seem really mature for your age'"
→ Tuteliq analyzes with age-calibrated scoring — no data retained

You: "Now check this follow-up with the previous message as context: 'Don't tell your parents about our chats'"
→ Pass conversation_history in context — Tuteliq detects escalation patterns across the full conversation
Privacy-first: No session store to breach, no conversation cache to leak, no accumulated history to subpoena. Your integration handles context; Tuteliq handles detection.
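The stateless pattern means your integration owns the conversation history and resends it with each call. A minimal sketch, assuming a request shape with `message` and `context` fields (the transport call itself is stubbed out):

```python
# Stateless integration pattern: context lives client-side and is sent
# with every request. The payload shape (message/context keys) is an
# assumption; only age_group, platform and conversation_history are
# parameter names taken from the docs.
class ConversationClient:
    def __init__(self, age_group, platform):
        self.age_group = age_group
        self.platform = platform
        self.history = []  # held by the client, never by the server

    def build_request(self, message):
        """Build the payload for one call, then record the message locally."""
        payload = {
            "message": message,
            "context": {
                "age_group": self.age_group,
                "platform": self.platform,
                "conversation_history": list(self.history),
            },
        }
        self.history.append(message)  # available as context on the next call
        return payload

client = ConversationClient("13-15", "Discord")
first = client.build_request("Hey, you seem really mature for your age")
second = client.build_request("Don't tell your parents about our chats")
```

Because each payload carries the full history, escalation across messages is detectable even though the server remembers nothing between calls.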
For testing without consuming credits, use a sandbox API key. Create one in your Dashboard under Settings > API Keys > Environment: Sandbox.

Sandbox keys:

- Run real analysis (not mocked), so you can validate integration behavior
- Don't consume credits — `credits_used` is always 0
- Have a daily limit of 50 calls and 10 requests per minute
- Return `"sandbox": true` in every response
Sandbox mode is for integration testing only. For production use, switch to a production API key.
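Since every sandbox response carries `"sandbox": true` and `credits_used` of 0, an integration test can guard against accidentally running with a production key. A sketch, assuming the response is a plain dict (the rest of the response shape is not documented here):

```python
# Guard for integration tests: the sandbox and credits_used fields come
# from the docs; treating the response as a dict is an assumption.
def assert_sandbox_response(response):
    """Fail fast if a test accidentally hit the API with a production key."""
    if not response.get("sandbox"):
        raise RuntimeError("expected a sandbox response; check the API key")
    if response.get("credits_used", 0) != 0:
        raise RuntimeError("sandbox calls must not consume credits")
    return True

ok = assert_sandbox_response({"sandbox": True, "credits_used": 0})
```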
If the API key is invalid or credits are exhausted, the tool will return a structured error message that the AI assistant can interpret and relay to the user.
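On the client side, that structured error can be mapped to a user-facing message. The error codes and field names below are hypothetical placeholders, not the documented error schema:

```python
# Hypothetical error-handling sketch: the "error"/"code" field names and
# the specific codes are invented; consult the API reference for the
# actual structured error schema.
def summarise_error(response):
    """Turn a structured API error into a message an assistant can relay."""
    err = response.get("error")
    if err is None:
        return None  # not an error response
    code = err.get("code", "unknown_error")
    if code == "invalid_api_key":
        return "The Tuteliq API key is invalid; check your MCP server config."
    if code == "credits_exhausted":
        return "The Tuteliq account is out of credits; top up to continue."
    return f"Tuteliq returned an error: {err.get('message', code)}"

msg = summarise_error({"error": {"code": "credits_exhausted"}})
```

Relaying a plain-language summary like this lets the assistant explain the failure instead of surfacing a raw error object to the user.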