Coverage Matrix
| Harm Category | Tuteliq Endpoint(s) | Status |
|---|---|---|
| Eating Disorders | /safety/unsafe | Covered |
| Substance Use | /safety/unsafe | Covered |
| Suicidal Behaviors | /safety/unsafe | Covered |
| Depression & Anxiety | /safety/unsafe + /analysis/emotions | Covered |
| Compulsive Usage | /safety/unsafe | Covered |
| Harassment & Bullying | /safety/bullying + /safety/coercive-control | Covered |
| Sexual Exploitation | /safety/grooming + /safety/unsafe | Covered |
| Voice & Audio Threats | /safety/voice | Covered |
| Visual Content Risks | /safety/image | Covered |
Extended Coverage (Beyond KOSA)
Tuteliq also provides detection for harm categories beyond the core nine KOSA requirements:
| Category | Tuteliq Endpoint(s) | Notes |
|---|---|---|
| Social Engineering | /fraud/social-engineering | Pretexting, impersonation, urgency manipulation |
| App & Download Fraud | /fraud/app-detection | Fake apps, malicious links, clone distribution |
| Romance Scams | /fraud/romance-scam | Love-bombing, financial exploitation |
| Money Mule Recruitment | /fraud/mule-recruitment | Laundering recruitment targeting minors |
| Gambling Harm | /safety/gambling-harm | Underage gambling, addiction patterns |
| Coercive Control | /safety/coercive-control | Isolation, financial control, surveillance |
| Vulnerability Exploitation | /safety/vulnerability-exploitation | Cross-endpoint vulnerability profiling |
| Radicalisation | /safety/radicalisation | Extremist rhetoric, recruitment patterns |
| Age Verification | /verification/age | Document-based age assurance (Beta — Pro tier) |
| Identity Verification | /verification/identity | Document + liveness identity confirmation (Beta — Business tier) |
Category Details
Eating Disorders
The /safety/unsafe endpoint detects promotion of disordered eating, pro-anorexia and pro-bulimia content, dangerous diet challenges, and body-shaming language that could trigger eating disorders in minors.
Substance Use
Content promoting or normalizing drug use, alcohol consumption by minors, vaping, and substance abuse challenges is flagged by /safety/unsafe under the substance sub-category.
Suicidal Behaviors
The /safety/unsafe endpoint identifies suicidal ideation, self-harm instructions, and content that glorifies or normalizes self-harm. Detections in this category are always elevated to critical severity to enable immediate intervention.
Depression & Anxiety
Tuteliq uses a two-endpoint approach for this category. The /safety/unsafe endpoint catches content that promotes hopelessness or emotional distress. The /analysis/emotions endpoint provides nuanced emotional tone analysis, detecting sustained patterns of anxiety, despair, or emotional manipulation in conversations.
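One way the two results could be combined on the client side. The response fields (severity, sustained_negative) and the one-level escalation rule are illustrative assumptions, not documented API behavior:

```python
# Sketch: merge results from /safety/unsafe and /analysis/emotions.
# "severity" and "sustained_negative" are assumed response fields.
SEVERITY_ORDER = ["none", "low", "medium", "high", "critical"]

def combine_depression_signals(unsafe: dict, emotions: dict) -> str:
    severity = unsafe.get("severity", "none")
    # Escalate one level when emotion analysis shows a sustained
    # negative pattern that single-message detection can miss.
    if emotions.get("sustained_negative"):
        idx = SEVERITY_ORDER.index(severity)
        severity = SEVERITY_ORDER[min(idx + 1, len(SEVERITY_ORDER) - 1)]
    return severity
```

The point of the sketch is that conversation-level emotional signals can justify raising a per-message severity, which is exactly what the two-endpoint design enables.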
Compulsive Usage
The /safety/unsafe endpoint detects dark patterns, engagement manipulation, and content designed to exploit compulsive usage behaviors in minors, such as artificial urgency, streak mechanics framing, and addictive loop language.
Harassment & Bullying
The dedicated /safety/bullying endpoint is purpose-built to detect direct insults, exclusionary language, coordinated harassment, intimidation, and cyberbullying patterns across text conversations.
Sexual Exploitation
This category uses two endpoints for layered detection. The /safety/grooming endpoint identifies predatory behavior patterns such as trust-building, isolation tactics, boundary testing, and secret-keeping. It also returns a per-message risk breakdown (message_analysis) showing how tactics escalate across the conversation, which is useful for incident timelines and audit trails. The /safety/unsafe endpoint catches explicit sexual content, sextortion language, and exploitation material references.
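A sketch of turning the per-message breakdown into an incident timeline. The shape of each message_analysis entry (index, tactic, risk) is an assumption about the response format:

```python
# Sketch: convert the message_analysis array from /safety/grooming
# into human-readable timeline lines for an audit trail.
# "index", "tactic", and "risk" are assumed per-entry field names.
def build_timeline(message_analysis: list[dict]) -> list[str]:
    return [
        f"msg {m['index']}: {m['tactic']} (risk={m['risk']})"
        for m in message_analysis
    ]
```

Persisting lines like these alongside the raw response gives moderators a readable escalation record without re-querying the API.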
Voice & Audio Threats
The /safety/voice endpoint accepts audio files and returns transcription with safety analysis. It detects verbal harassment, threats, grooming in voice messages, and other harmful audio content. For real-time monitoring, see Voice Streaming.
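A sketch of preparing an audio payload for this endpoint. Whether the API expects multipart uploads or base64-in-JSON is not stated on this page; this sketches the base64 variant, and both field names are assumptions:

```python
import base64

# Hypothetical payload builder for POST /safety/voice.
# "filename" and "audio" are assumed field names; the API may
# instead expect a multipart/form-data upload.
def build_voice_request(audio_bytes: bytes, filename: str) -> dict:
    return {
        "filename": filename,
        # base64 keeps binary audio safe inside a JSON body
        "audio": base64.b64encode(audio_bytes).decode("ascii"),
    }
```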
Visual Content Risks
The /safety/image endpoint analyzes images for inappropriate visual content including explicit material, violent imagery, self-harm depictions, and other content that poses risks to minors.
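The per-category endpoints above can be wired into a simple dispatcher keyed by content type. The endpoint paths come from this page; the dispatch shape itself is an illustrative assumption:

```python
# Sketch: pick detection endpoints by content type. Endpoint paths
# are taken from the coverage tables above; the routing function
# itself is illustrative, not part of the Tuteliq API.
TEXT_ENDPOINTS = ["/safety/unsafe", "/safety/bullying", "/safety/grooming"]

def endpoints_for(content_type: str) -> list[str]:
    if content_type == "text":
        return TEXT_ENDPOINTS
    if content_type == "audio":
        return ["/safety/voice"]
    if content_type == "image":
        return ["/safety/image"]
    raise ValueError(f"unsupported content type: {content_type}")
```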
Age-Calibrated Severity
All Tuteliq safety endpoints accept an optional age_group parameter within the context object. When provided, severity scores are calibrated to the developmental stage of the minor. Content that may be flagged as medium for a 16-year-old could be elevated to high or critical for an 8-year-old.
Multilingual KOSA Coverage
All KOSA harm categories are detected across 27 languages: English (stable), plus Spanish, Portuguese, Ukrainian, Swedish, Norwegian, Danish, Finnish, German, French, Dutch, Polish, Italian, Turkish, Romanian, Greek, Czech, Hungarian, Bulgarian, Croatian, Slovak, Lithuanian, Latvian, Estonian, Slovenian, Maltese, and Irish (beta). Language is auto-detected; no explicit parameter is required. Each language includes culture-specific analysis guidelines for local slang, idioms, harmful terms, grooming indicators, self-harm coded vocabulary, and filter evasion techniques, ensuring that detection quality is maintained across linguistic contexts.
Implementation Guidance
To achieve full KOSA coverage, integrate the following endpoints into your content pipeline:
Text Content
Route user-generated text through /safety/unsafe, /safety/bullying, and /safety/grooming. This covers eating disorders, substance use, suicidal behaviors, compulsive usage, harassment, and sexual exploitation. For broader coverage, use /analyse/multi to fan out to multiple detection endpoints in a single call.
Fraud & Extended Safety
For platforms handling financial interactions or vulnerable users, add /fraud/social-engineering, /fraud/romance-scam, /fraud/mule-recruitment, /safety/gambling-harm, /safety/coercive-control, and /safety/radicalisation to your pipeline. Include /safety/vulnerability-exploitation to get a cross-endpoint vulnerability modifier that adjusts severity across all results.
Emotional Monitoring
For messaging and social features, use /analysis/emotions to detect sustained patterns of depression and anxiety in conversations.
Voice Content
Send audio messages and voice chat recordings to /safety/voice, or use the WebSocket streaming endpoint for live monitoring.
Image Content
Route uploaded images and profile pictures through /safety/image to cover visual content risks.
Alerting
Configure webhooks for safety.critical and safety.high events to ensure immediate response to the most severe incidents.
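A minimal webhook handler sketch. The event names come from this page; the payload shape ({"event": ...}) and the escalation callback are assumptions about your own integration, not the Tuteliq delivery format:

```python
# Sketch: escalate only the most severe webhook events.
# The "event" key in the delivered payload is an assumed field name.
URGENT_EVENTS = {"safety.critical", "safety.high"}

def handle_webhook(event: dict, escalate) -> bool:
    """Return True when the event was routed for immediate response."""
    if event.get("event") in URGENT_EVENTS:
        escalate(event)  # e.g. page on-call staff or open an incident
        return True
    return False  # lower-severity events can go to a review queue
```

In production you would also verify the webhook's signature before acting on it, if the platform signs deliveries.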