The Tuteliq MCP server (@tuteliq/mcp v3.7.0) exposes child safety detection as tools for AI assistants that support the Model Context Protocol — including Claude Desktop, Cursor, Windsurf, and other MCP-compatible clients.

Setup

Claude Desktop connects to Tuteliq via Streamable HTTP — no npm install required.
1. Generate a Secure Token

Go to your Tuteliq Dashboard, navigate to Settings > Plugins, and generate a Secure Token. This token authenticates the MCP connection.
2. Add the connector in Claude Desktop

  1. Open Claude Desktop and go to Settings > Connectors
  2. Click Add custom connector
  3. Set the name to Tuteliq and the URL to:
    https://api.tuteliq.ai/mcp
    
3. Connect and authenticate

Click Connect. When prompted, enter the Secure Token you generated in Step 1. That’s it — Tuteliq tools will be available in your next conversation.

Cursor

Add to your Cursor MCP settings:
{
  "mcpServers": {
    "tuteliq": {
      "url": "https://api.tuteliq.ai/mcp",
      "headers": {
        "Authorization": "Bearer your-api-key"
      }
    }
  }
}

Claude Code

Add to your project’s .mcp.json:
{
  "mcpServers": {
    "tuteliq": {
      "url": "https://api.tuteliq.ai/mcp",
      "headers": {
        "Authorization": "Bearer your-api-key"
      }
    }
  }
}

Other MCP clients (npx / stdio)

For clients that only support the stdio transport, install the package globally, then point your client at the tuteliq-mcp command:
npm install -g @tuteliq/mcp
{
  "mcpServers": {
    "tuteliq": {
      "command": "tuteliq-mcp",
      "env": {
        "TUTELIQ_API_KEY": "your_api_key"
      }
    }
  }
}
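If you prefer to avoid a global install, many stdio clients can launch the server on demand via npx instead. This is a sketch of that common MCP pattern; check your client's documentation for its exact config format:

```json
{
  "mcpServers": {
    "tuteliq": {
      "command": "npx",
      "args": ["-y", "@tuteliq/mcp"],
      "env": {
        "TUTELIQ_API_KEY": "your_api_key"
      }
    }
  }
}
```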

Available tools

Once configured, the following tools are available to the AI assistant:

Detection tools

All detection tools accept a common set of parameters:
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| content | string | Yes | Text content to analyze |
| context | object | No | Analysis context — see Context fields below |
| include_evidence | boolean | No | When true, returns supporting evidence excerpts with flagged phrases and weights |
| support_threshold | string | No | Minimum severity to include crisis helplines. Values: low, medium, high (default), critical. Critical severity always includes support resources regardless of this setting. |
| external_id | string | No | Your external tracking ID (echoed in response) |
| customer_id | string | No | Your customer identifier (echoed in response) |
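As a concrete illustration, a call to a detection tool such as detect_bullying might carry arguments shaped like this. The parameter names mirror the table above; the dict itself is an illustrative sketch, not an official client SDK:

```python
# Illustrative arguments for a detection tool call (e.g. detect_bullying).
# Parameter names follow the table above; no Python SDK is implied.
arguments = {
    "content": "Nobody likes you, just leave the server already",  # required
    "context": {                       # optional analysis context
        "age_group": "13-15",
        "platform": "Discord",
    },
    "include_evidence": True,          # return flagged phrases and weights
    "support_threshold": "high",       # default; helplines at high or above
    "external_id": "msg-001",          # echoed back in the response
}

# Minimal shape check before sending: content is the only required field.
assert isinstance(arguments["content"], str) and arguments["content"]
```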
| Tool | Description |
| --- | --- |
| detect_unsafe | Detect harmful content across all nine KOSA categories |
| detect_bullying | Detect bullying and harassment patterns |
| detect_grooming | Detect grooming patterns in conversations |
| detect_social_engineering | Detect social engineering tactics (pretexting, impersonation, urgency manipulation) |
| detect_app_fraud | Detect app-based fraud (fake investments, phishing, subscription traps) |
| detect_romance_scam | Detect romance scam patterns (love-bombing, financial requests, identity deception) |
| detect_mule_recruitment | Detect money mule recruitment (easy-money offers, account sharing) |
| detect_gambling_harm | Detect gambling harm indicators (chasing losses, concealment, underage gambling) |
| detect_coercive_control | Detect coercive control patterns (isolation, financial control, surveillance) |
| detect_vulnerability_exploitation | Detect exploitation targeting vulnerable individuals |
| detect_radicalisation | Detect radicalisation indicators (extremist rhetoric, recruitment patterns) |

Multi-endpoint analysis

analyse_multi runs up to 10 detections on a single text in one call.
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| content | string | Yes | Text content to analyze |
| endpoints | string[] | Yes | List of endpoint IDs to run (see table below) |
| context | object | No | Analysis context — see Context fields |
| include_evidence | boolean | No | Include supporting evidence in each result |
| support_threshold | string | No | Crisis helpline threshold (low / medium / high / critical) |
Valid endpoint values for analyse_multi:

| Endpoint ID | Classifier |
| --- | --- |
| bullying | Bullying & Harassment |
| grooming | Grooming Detection |
| unsafe | Unsafe Content (KOSA categories) |
| social-engineering | Social Engineering |
| app-fraud | App-based Fraud |
| romance-scam | Romance Scam |
| mule-recruitment | Mule Recruitment |
| gambling-harm | Gambling Harm |
| coercive-control | Coercive Control |
| vulnerability-exploitation | Vulnerability Exploitation |
| radicalisation | Radicalisation |
When vulnerability-exploitation is included, its cross-endpoint modifier automatically adjusts severity scores across all other results — amplifying risk when the content targets vulnerable individuals.
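A client-side sketch of how an analyse_multi argument set could be validated against the endpoint IDs and the 10-endpoint cap before sending. The helper name and validation logic are illustrative, not part of any shipped SDK:

```python
# Hypothetical helper: validate an analyse_multi endpoint list against
# the endpoint IDs documented above and the 10-endpoint-per-call cap.
VALID_ENDPOINTS = {
    "bullying", "grooming", "unsafe", "social-engineering", "app-fraud",
    "romance-scam", "mule-recruitment", "gambling-harm", "coercive-control",
    "vulnerability-exploitation", "radicalisation",
}

def build_multi_arguments(content: str, endpoints: list[str]) -> dict:
    unknown = set(endpoints) - VALID_ENDPOINTS
    if unknown:
        raise ValueError(f"unknown endpoint IDs: {sorted(unknown)}")
    if not 1 <= len(endpoints) <= 10:
        raise ValueError("analyse_multi accepts between 1 and 10 endpoints")
    return {"content": content, "endpoints": endpoints}

args = build_multi_arguments(
    "if you really loved me you'd invest in this for us",
    ["romance-scam", "social-engineering", "app-fraud"],
)
```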

Media analysis tools

| Tool | Parameters | Description |
| --- | --- | --- |
| analyze_voice | file_path, age_group | Transcribe and analyze audio files for safety |
| analyze_image | file_path, age_group | Analyze image files for visual content risks |
| analyze_video | file_path, age_group | Analyze video files with per-frame safety findings |
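For these tools the argument shape is small; a sketch of an analyze_image call (the file path is a placeholder, and the dict is illustrative rather than an SDK):

```python
# Illustrative arguments for a media analysis tool such as analyze_image.
arguments = {
    "file_path": "/path/to/screenshot.png",  # file readable by the server
    "age_group": "13-15",                    # calibrates scoring
}
```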

Guidance & reporting tools

| Tool | Parameters | Description |
| --- | --- | --- |
| analyze_emotions | content, context | Analyze emotional well-being from text |
| get_action_plan | detection_result, audience | Generate age-appropriate guidance (child, parent, professional) |
| generate_report | messages, childAge, incidentType | Create structured incident reports for law enforcement or safeguarding teams |

Context fields

Pass a context object with any detection tool to improve accuracy:
| Field | Type | Effect |
| --- | --- | --- |
| ageGroup / age_group | string | Triggers age-calibrated scoring. Values: "under 10", "10-12", "13-15", "16-17", "under 18" |
| language | string | ISO 639-1 code (e.g., "en", "de", "sv"). Auto-detected if omitted. |
| platform | string | Platform name (e.g., "Discord", "Roblox", "WhatsApp"). Adjusts for platform-specific norms. |
| conversation_history | array | Prior messages for context-aware analysis. Returns per-message message_analysis. |
| sender_trust | string | "verified", "trusted", or "unknown". |
| sender_name | string | Sender identifier (used with sender_trust). |
| country | string | ISO 3166-1 alpha-2 code (e.g., "GB", "US", "SE"). Enables geo-localised crisis helpline data. Falls back to user profile country if omitted. |
When sender_trust is "verified", the API fully suppresses AUTH_IMPERSONATION — a verified sender cannot be impersonating an authority by definition. Routine urgency (schedules, deadlines) is also suppressed. Only genuinely malicious content (credential theft, phishing links, financial demands) will be flagged.
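A fully populated context object, combining the fields above, might look like this. The field names follow the table; the message shape inside conversation_history is an assumption made for illustration:

```python
# Illustrative: a well-formed context object using the fields above.
# The per-message keys in conversation_history are assumed, not documented.
context = {
    "age_group": "10-12",
    "language": "en",            # ISO 639-1; auto-detected if omitted
    "platform": "Roblox",
    "country": "GB",             # ISO 3166-1 alpha-2, for helpline data
    "sender_trust": "unknown",
    "sender_name": "player_8841",
    "conversation_history": [
        {"sender": "player_8841", "content": "you seem really mature for your age"},
        {"sender": "child", "content": "thanks lol"},
    ],
}
```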

Example usage

Once the MCP server is running, you can ask your AI assistant to use Tuteliq tools directly in conversation:
“Check this message for safety: ‘Let’s meet at the park after school, don’t tell your parents’ — the user is 10-12 years old”
The assistant will call detect_unsafe and return the full safety analysis including severity, categories, risk score, and rationale.
“Analyze this conversation for grooming patterns” (with a conversation pasted or in a file)
The assistant will call detect_grooming and provide a detailed breakdown of any detected grooming stages.
“Check this message for social engineering: ‘If you really trusted me you’d send your address’”
The assistant will call detect_social_engineering and return whether manipulation tactics were detected.
“Run grooming, romance scam, and social engineering detection on this message”
The assistant will call analyse_multi with all three endpoints and return combined results with an overall risk level.
“Analyze this video for safety concerns” (with a file path)
The assistant will call analyze_video and return frame-by-frame safety findings.

Resources

The MCP server also exposes resources for context:
| Resource | Description |
| --- | --- |
| tuteliq://kosa-categories | List of all nine KOSA harm categories |
| tuteliq://age-groups | Available age group brackets and their calibration |
| tuteliq://credit-costs | Per-endpoint credit costs |

Stateless by Design

Tuteliq is fully stateless — no conversation text, context, or session state is retained between requests. This is a deliberate privacy-by-design decision. When processing messages from minors under GDPR/COPPA, the safest data is data you never store. Each request arrives, is analyzed, and the content is discarded. Pass context (age_group, platform, conversation_history) with every call that needs it.
You: "Check this message for grooming. The user is 13-15 on Discord: 'Hey, you seem really mature for your age'"
→ Tuteliq analyzes with age-calibrated scoring — no data retained

You: "Now check this follow-up with the previous message as context: 'Don't tell your parents about our chats'"
→ Pass conversation_history in context — Tuteliq detects escalation patterns across the full conversation
Privacy-first: No session store to breach, no conversation cache to leak, no accumulated history to subpoena. Your integration handles context; Tuteliq handles detection.
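Because no state lives on the server, the caller threads the history through on each request. A minimal sketch of that pattern, assuming a hypothetical build_request helper on your side (your MCP client then sends the arguments):

```python
# Sketch: Tuteliq keeps no state, so the caller carries the history.
# build_request is a hypothetical helper, not part of any SDK.
def build_request(message: str, history: list[str]) -> dict:
    args = {"content": message}
    if history:
        # Prior messages ride along in context on every call that needs them.
        args["context"] = {"conversation_history": list(history)}
    return args

history: list[str] = []

first = build_request("Hey, you seem really mature for your age", history)
history.append(first["content"])

# The follow-up re-sends the earlier message, so escalation across
# the conversation is visible to the detector on a stateless call.
second = build_request("Don't tell your parents about our chats", history)
```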

Sandbox mode

For testing without consuming credits, use a sandbox API key. Create one in your Dashboard under Settings > API Keys > Environment: Sandbox. Sandbox keys:
  • Run real analysis (not mocked) so you can validate integration behavior
  • Don’t consume credits (credits_used is always 0)
  • Have a daily limit of 50 calls and 10 requests per minute
  • Return "sandbox": true in every response
Sandbox mode is for integration testing only. For production use, switch to a production API key.
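One practical use of the "sandbox": true response field is a guard against accidentally shipping a sandbox key to production. A sketch, where the response dict shape beyond the fields listed above is assumed for illustration:

```python
# Sketch: guard against a sandbox API key reaching production.
# Only "sandbox" and "credits_used" come from the bullets above;
# the rest of the response shape is assumed.
def check_environment(response: dict, expect_production: bool) -> None:
    if expect_production and response.get("sandbox"):
        raise RuntimeError("sandbox API key in use; switch to a production key")

resp = {"sandbox": True, "credits_used": 0, "severity": "low"}
check_environment(resp, expect_production=False)  # fine during testing
```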

Error handling

If the API key is invalid or credits are exhausted, the tool will return a structured error message that the AI assistant can interpret and relay to the user.

Configuration options

Environment variables:
| Variable | Description | Default |
| --- | --- | --- |
| TUTELIQ_API_KEY | Your Tuteliq API key | Required |
| TUTELIQ_BASE_URL | API base URL | https://api.tuteliq.ai |
| TUTELIQ_TIMEOUT | Request timeout in ms | 30000 |
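For stdio setups that read the environment rather than a config file, the variables above can be exported before launching the server (the key value is a placeholder):

```shell
# Environment for clients launching tuteliq-mcp via stdio.
export TUTELIQ_API_KEY="your_api_key"
export TUTELIQ_BASE_URL="https://api.tuteliq.ai"   # default
export TUTELIQ_TIMEOUT="30000"                     # milliseconds
```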

Next steps

API Reference

Explore the full API specification.

CLI

See the CLI guide.