The Kids Online Safety Act (KOSA) requires covered platforms to prevent and mitigate specific categories of harm to minors. Tuteliq provides API endpoints that map directly to each of the nine KOSA harm categories, giving you a single integration point for full compliance coverage.

Coverage Matrix

Harm Category           | Tuteliq Endpoint(s)                   | Status
Eating Disorders        | /safety/unsafe                        | Covered
Substance Use           | /safety/unsafe                        | Covered
Suicidal Behaviors      | /safety/unsafe                        | Covered
Depression & Anxiety    | /safety/unsafe + /analysis/emotions   | Covered
Compulsive Usage        | /safety/unsafe                        | Covered
Harassment & Bullying   | /safety/bullying                      | Covered
Sexual Exploitation     | /safety/grooming + /safety/unsafe     | Covered
Voice & Audio Threats   | /safety/voice                         | Covered
Visual Content Risks    | /safety/image                         | Covered

Category Details

Eating Disorders

The /safety/unsafe endpoint detects promotion of disordered eating, pro-anorexia and pro-bulimia content, dangerous diet challenges, and body-shaming language that could trigger eating disorders in minors.
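
A minimal request sketch for this check (the base URL, auth header, and the requests-based client are illustrative assumptions; the /safety/unsafe path and the text and age fields are documented on this page):

import requests

BASE_URL = "https://api.tuteliq.example"            # placeholder base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # assumed auth scheme

# Submit user-generated text for unsafe-content analysis.
resp = requests.post(
    f"{BASE_URL}/safety/unsafe",
    headers=HEADERS,
    json={"text": "join the 500-calorie challenge with me this week", "age": 14},
)
resp.raise_for_status()
print(resp.json())  # response field names are not specified here; inspect the payload

The same call pattern applies to the other text categories handled by /safety/unsafe.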

Substance Use

Content promoting or normalizing drug use, alcohol consumption by minors, vaping, and substance abuse challenges is flagged by /safety/unsafe under the substance sub-category.

Suicidal Behaviors

The /safety/unsafe endpoint identifies suicidal ideation, self-harm instructions, and content that glorifies or normalizes self-harm. Detections in this category are always elevated to critical severity to enable immediate intervention.
Alerts in the suicidal behaviors category should trigger your most urgent response workflow. Consider routing these to human moderators with mandatory response SLAs.
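
A sketch of that escalation logic, assuming the detection payload exposes category and severity fields (those field names and values are assumptions for illustration, not a documented schema):

def route_detection(result: dict) -> None:
    # "category" and "severity" are assumed field names; adapt to the actual response.
    if result.get("category") == "suicidal_behaviors" or result.get("severity") == "critical":
        # Most urgent workflow: notify human moderators and start the response-SLA clock.
        print("CRITICAL: escalate to on-call moderation", result)
    else:
        print("queue for standard review", result)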

Depression & Anxiety

Tuteliq uses a two-endpoint approach for this category. The /safety/unsafe endpoint catches content that promotes hopelessness or emotional distress. The /analysis/emotions endpoint provides nuanced emotional tone analysis, detecting sustained patterns of anxiety, despair, or emotional manipulation in conversations.
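
A sketch of the two-endpoint approach, assuming both endpoints accept the same JSON text/age request body and using a placeholder base URL and auth header:

import requests

BASE_URL = "https://api.tuteliq.example"            # placeholder base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # assumed auth scheme
message = {"text": "every day just feels heavier, nothing helps anymore", "age": 13}

# Point-in-time check for content promoting hopelessness or emotional distress.
unsafe = requests.post(f"{BASE_URL}/safety/unsafe", headers=HEADERS, json=message).json()

# Emotional tone analysis; for sustained patterns, send messages from the same
# conversation over time rather than a single snippet.
emotions = requests.post(f"{BASE_URL}/analysis/emotions", headers=HEADERS, json=message).json()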

Compulsive Usage

The /safety/unsafe endpoint detects dark patterns, engagement manipulation, and content designed to exploit compulsive usage in minors, such as artificial urgency, streak-mechanic framing, and addictive-loop language.

Harassment & Bullying

The dedicated /safety/bullying endpoint detects direct insults, exclusionary language, coordinated harassment, intimidation, and cyberbullying patterns across text conversations.

Sexual Exploitation

This category uses two endpoints for layered detection. The /safety/grooming endpoint identifies predatory behavior patterns such as trust-building, isolation tactics, boundary testing, and secret-keeping. The /safety/unsafe endpoint catches explicit sexual content, sextortion language, and exploitation material references.
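
A layered-detection sketch along the same lines (the base URL, auth header, and the flagged response field are assumptions for illustration):

import requests

BASE_URL = "https://api.tuteliq.example"            # placeholder base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # assumed auth scheme
payload = {"text": "don't tell your parents about our chats, it's our secret", "age": 11}

grooming = requests.post(f"{BASE_URL}/safety/grooming", headers=HEADERS, json=payload).json()
unsafe = requests.post(f"{BASE_URL}/safety/unsafe", headers=HEADERS, json=payload).json()

# Layered decision: act if either endpoint flags the message ("flagged" is an assumed field).
if grooming.get("flagged") or unsafe.get("flagged"):
    print("escalate for review:", grooming, unsafe)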

Voice & Audio Threats

The /safety/voice endpoint accepts audio files and returns transcription with safety analysis. It detects verbal harassment, threats, grooming in voice messages, and other harmful audio content. For real-time monitoring, see Voice Streaming.
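
A sketch of an audio upload, assuming a multipart request with a file part and an age form field (those field names, the base URL, and the auth header are assumptions; the /safety/voice path is documented here):

import requests

BASE_URL = "https://api.tuteliq.example"            # placeholder base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # assumed auth scheme

# Upload a recorded voice message for transcription plus safety analysis.
with open("voice_message.ogg", "rb") as audio:
    resp = requests.post(
        f"{BASE_URL}/safety/voice",
        headers=HEADERS,
        files={"file": audio},   # assumed multipart field name
        data={"age": 12},
    )
print(resp.json())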

Visual Content Risks

The /safety/image endpoint analyzes images for inappropriate visual content including explicit material, violent imagery, self-harm depictions, and other content that poses risks to minors.
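
A sketch of an image check in the same style (the image multipart field name, base URL, and auth header are assumptions):

import requests

BASE_URL = "https://api.tuteliq.example"            # placeholder base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # assumed auth scheme

# Analyze an uploaded image or profile picture for visual content risks.
with open("profile_picture.jpg", "rb") as image:
    resp = requests.post(
        f"{BASE_URL}/safety/image",
        headers=HEADERS,
        files={"image": image},  # assumed multipart field name
        data={"age": 10},
    )
print(resp.json())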

Age-Calibrated Severity

All Tuteliq safety endpoints accept an optional age parameter (integer, 0-17). When provided, severity scores are calibrated to the developmental stage of the minor. Content that may be flagged as medium for a 16-year-old could be elevated to high or critical for an 8-year-old.
{
  "text": "let's play a secret game, just between us",
  "age": 9
}
Age calibration is applied automatically across all harm categories when the parameter is present. This helps platforms implement age-appropriate safety measures as recommended by KOSA.

Implementation Guidance

To achieve full KOSA coverage, integrate the following endpoints into your content pipeline:
1. Text Content

Route user-generated text through /safety/unsafe, /safety/bullying, and /safety/grooming. This covers eating disorders, substance use, suicidal behaviors, compulsive usage, harassment, and sexual exploitation.

2. Emotional Monitoring

For messaging and social features, use /analysis/emotions to detect sustained patterns of depression and anxiety in conversations.

3. Voice Content

Send audio messages and voice chat recordings to /safety/voice, or use the WebSocket streaming endpoint for live monitoring.

4. Image Content

Route uploaded images and profile pictures through /safety/image to cover visual content risks.

5. Alerting

Configure webhooks for safety.critical and safety.high events to ensure immediate response to the most severe incidents.
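
A minimal webhook receiver sketch, assuming Tuteliq delivers a JSON payload with an event field (the payload shape and the Flask handler are assumptions; safety.critical and safety.high are the event types named above):

from flask import Flask, request

app = Flask(__name__)

def page_on_call_team(event: dict) -> None:
    print("CRITICAL safety event:", event)       # placeholder for your paging/incident tooling

def queue_priority_review(event: dict) -> None:
    print("high-severity safety event:", event)  # placeholder for your moderation queue

@app.post("/webhooks/tuteliq")
def tuteliq_webhook():
    event = request.get_json(force=True)         # payload shape is an assumption
    if event.get("event") == "safety.critical":
        page_on_call_team(event)
    elif event.get("event") == "safety.high":
        queue_priority_review(event)
    return "", 204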