Base URL
All API routes are served under the /api/v1 prefix. For example, the bullying detection endpoint is at:
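As a sketch of URL construction (the real host and the endpoint path come from your Dashboard and the OpenAPI spec; the host and `/detect/bullying` below are assumed names, not confirmed by this page):

```python
# Hypothetical host and path -- check your Dashboard and the OpenAPI spec.
BASE_URL = "https://api.example.com/api/v1"

def endpoint(path: str) -> str:
    """Join an endpoint path onto the versioned base URL."""
    return BASE_URL.rstrip("/") + "/" + path.lstrip("/")
```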
Authentication
Include your API key in every request using one of the supported methods.

Response Format
All detection endpoints return a consistent response shape, with separate bodies for successful and error responses.
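A hedged sketch of the request headers and response shapes. The Bearer-header convention and every field name below are assumptions built from fields this page mentions elsewhere (severity, risk_score, level, credits_used); the authoritative schema is in the OpenAPI spec.

```python
# All names below are illustrative assumptions -- consult the OpenAPI spec.
headers = {
    "Authorization": "Bearer <your-api-key>",  # one common auth convention
    "Content-Type": "application/json",
}

# Assumed success shape, built from fields documented elsewhere on this page.
success_example = {
    "flagged": True,
    "severity": "high",
    "risk_score": 0.87,
    "level": "alert",
    "credits_used": 1,
}

# Assumed error shape, using a code from the Common Error Codes table.
error_example = {
    "error": {
        "code": "VAL_INVALID_INPUT",
        "message": "Request body or parameters failed validation",
    }
}
```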
Context Fields
Pass a `context` object with any detection request to improve accuracy:
| Field | Type | Description |
|---|---|---|
| age_group | string | "under 10", "10-12", "13-15", "16-17", or "under 18". Triggers age-calibrated severity scoring. |
| language | string | ISO 639-1 code (e.g., "en", "de", "sv"). Auto-detected if omitted. 27 languages supported. |
| platform | string | Platform name (e.g., "Discord", "Roblox", "WhatsApp"). Adjusts for platform-specific norms. |
| conversation_history | array | Prior messages for context-aware analysis. Each message needs role and text. |
| sender_trust | string | "verified", "trusted", or "unknown". Verified senders suppress authority-impersonation false positives. |
| sender_name | string | Sender identifier, used alongside sender_trust for impersonation scoring. |
| country | string | ISO 3166-1 alpha-2 country code (e.g., "GB", "US", "SE"). Enables geo-localised crisis helpline data. Falls back to the user profile country if omitted. |
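An example context object assembled from the fields above. All values are illustrative, and the role values ("sender", "child") are assumptions; this page only states that each history message needs role and text.

```python
# Illustrative context object -- field names from the table above,
# role values assumed.
context = {
    "age_group": "13-15",
    "language": "en",
    "platform": "Discord",
    "sender_trust": "unknown",
    "sender_name": "user_4821",
    "country": "GB",
    "conversation_history": [
        {"role": "sender", "text": "hey, you there?"},
        {"role": "child", "text": "yeah who is this"},
    ],
}
```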
Options
| Field | Type | Default | Description |
|---|---|---|---|
| include_evidence | boolean | true | Include evidence excerpts with flagged phrases and weights. |
| support_threshold | string | "high" | Minimum severity at which crisis helplines are included. Values: low, medium, high, critical. Critical severity always includes support resources. |
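A full request body sketch combining the context and options fields documented above. The top-level "message" field name is an assumption, not confirmed by this page:

```python
# Request body sketch -- "message" as the text field name is assumed;
# context and options follow the tables above.
payload = {
    "message": "you're such a loser, nobody likes you",
    "context": {"age_group": "10-12", "platform": "Roblox"},
    "options": {"include_evidence": True, "support_threshold": "medium"},
}
```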
Stateless by Design
Tuteliq is fully stateless: every API call is independent, and no conversation text, context, or session state is retained between requests. This is a deliberate privacy-by-design decision, not a missing feature.

Why stateless?
- GDPR compliance: processing children's data under GDPR/COPPA demands the strictest data minimization. By retaining no cross-request state, there is zero risk of sensitive conversation data persisting in caches, logs, or backups.
- No data retention surface: there is no session store to breach, no conversation cache to leak, and no accumulated history to subpoena. Each request arrives, is analyzed, and the content is discarded.
- Simpler compliance audits: "we store nothing between requests" is the easiest privacy posture to audit and certify.
- Pass `context` (age_group, platform, language, conversation_history) with every request that needs it.
- Use `external_id` and `customer_id` to correlate results with your own systems; these are echoed back but not stored.
- If you need to track risk escalation across a conversation, aggregate results on your side using the `severity`, `risk_score`, and `level` fields returned by each call.
This is intentional. Many child safety APIs offer session-based context accumulation. We chose not to — because when you’re processing messages from minors, the safest data is data you never store. Your integration handles context; Tuteliq handles detection.
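The client-side aggregation described above can be sketched as follows. Because Tuteliq stores nothing between requests, your integration folds each call's severity / risk_score / level into a running picture; the severity rank order here is an assumption based on the values listed for support_threshold.

```python
# Client-side escalation tracking -- Tuteliq is stateless, so aggregation
# happens on your side. Severity ordering is assumed (low < ... < critical).
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

class ConversationRisk:
    def __init__(self):
        self.peak_severity = "low"
        self.scores = []

    def record(self, result: dict) -> None:
        """Fold one detection result into the running conversation picture."""
        self.scores.append(result["risk_score"])
        if SEVERITY_RANK[result["severity"]] > SEVERITY_RANK[self.peak_severity]:
            self.peak_severity = result["severity"]

    @property
    def escalating(self) -> bool:
        """True when the latest score exceeds the running average."""
        return len(self.scores) >= 2 and self.scores[-1] > sum(self.scores) / len(self.scores)
```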
Sandbox Mode
API keys with `environment: "sandbox"` run real analysis without consuming credits. Sandbox responses include `"sandbox": true` and the `X-Sandbox-Mode: true` header.
Create a sandbox key in your Dashboard under Settings > API Keys > Environment: Sandbox.
Sandbox limits:
- 10 requests per minute rate limit
- 50 calls per day (resets at midnight UTC)
- Real analysis, real results — not mocked
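A small sketch for telling sandbox responses apart in your integration, using the two markers documented above (plain dicts stand in for a real HTTP response object):

```python
# Detect sandbox responses via the documented body field and header.
def is_sandbox(headers: dict, body: dict) -> bool:
    return headers.get("X-Sandbox-Mode") == "true" or body.get("sandbox") is True
```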
Rate Limits
Rate limits are enforced per API key per minute, based on your plan:

| Tier | Requests/min | Monthly calls | Credits/mo |
|---|---|---|---|
| Starter (Free) | 60 | 1,000 | 1,000 |
| Indie ($29/mo) | 300 | 10,000 | 10,000 |
| Pro ($99/mo) | 1,000 | 50,000 | 50,000 |
| Business ($349/mo) | 5,000 | 200,000 | 200,000 |
| Enterprise | 10,000 | Custom | Custom |
| Sandbox | 10 | 50/day | No credits consumed |
Requests over the limit receive 429 Too Many Requests with a Retry-After header.
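A backoff sketch that honors the Retry-After header on 429 responses. `send` is a placeholder for your HTTP call, assumed here to return a `(status, headers, body)` tuple:

```python
import time

# Retry on 429, sleeping for the server-suggested Retry-After delay
# (seconds form), with exponential backoff as the fallback.
def call_with_retry(send, max_attempts: int = 3):
    for attempt in range(max_attempts):
        status, headers, body = send()
        if status != 429:
            return body
        time.sleep(float(headers.get("Retry-After", 2 ** attempt)))
    raise RuntimeError("rate limited after retries")
```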
Credits
Each endpoint consumes a different number of credits:

| Endpoint | Credits | Notes |
|---|---|---|
| Text detection (bullying, grooming, unsafe, etc.) | 1 | Per call |
| Grooming / Emotions (with history) | 1 per 10 messages | ceil(messages / 10), minimum 1 |
| Action plan | 2 | Longer generation |
| Incident report | 3 | Structured output |
| Image analysis | 3 | Vision + OCR + safety |
| Voice analysis | 5 | Transcription + safety |
| Age verification | 5 | Document/biometric |
| Document analysis | Dynamic | max(3, pages × endpoints) — details |
| Video analysis | 10 | Frame extraction + per-frame analysis |
| Identity verification | 10 | Document auth + face match + liveness |
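The two dynamic formulas in the table, expressed as code: ceil(messages / 10) with a minimum of 1 for history-based endpoints, and max(3, pages × endpoints) for document analysis.

```python
from math import ceil

def history_credits(message_count: int) -> int:
    """Grooming / Emotions with history: 1 credit per 10 messages, minimum 1."""
    return max(1, ceil(message_count / 10))

def document_credits(pages: int, endpoints: int) -> int:
    """Document analysis: max(3, pages * endpoints)."""
    return max(3, pages * endpoints)
```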
Each response reports the credits consumed via `credits_used` and credit balance headers.
Common Error Codes
| Code | HTTP | Description |
|---|---|---|
| AUTH_REQUIRED | 401 | No API key provided |
| AUTH_INVALID_KEY | 401 | API key is invalid or unrecognized |
| AUTH_EXPIRED_KEY | 401 | API key has expired |
| AUTH_INACTIVE_KEY | 401 | API key is inactive |
| RATE_LIMIT_EXCEEDED | 429 | Rate limit exceeded for your tier |
| MESSAGE_LIMIT_REACHED | 429 | Monthly credit limit reached |
| TIER_ACCESS_DENIED | 403 | Endpoint not available on your tier |
| VAL_INVALID_INPUT | 400 | Request body or parameters failed validation |
| SVC_INTERNAL_ERROR | 500 | Unexpected internal error (safe to retry) |
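A minimal retry-classification sketch from the table above. Only SVC_INTERNAL_ERROR is documented as safe to retry; RATE_LIMIT_EXCEEDED should be retried only after the Retry-After delay, and the auth/validation codes need a fix on your side first.

```python
# Error codes worth retrying, per the Common Error Codes table.
RETRYABLE = {"SVC_INTERNAL_ERROR", "RATE_LIMIT_EXCEEDED"}

def should_retry(code: str) -> bool:
    """True for transient errors; auth and validation errors never retry."""
    return code in RETRYABLE
```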
Endpoint Groups
Endpoint pages are auto-generated from the OpenAPI specification and appear in the sidebar. Each page includes request/response schemas, parameter descriptions, and an interactive playground.
Safety
Bullying, grooming, unsafe content, voice, image, and video analysis. Covers all 9 KOSA harm categories.
Fraud Detection
Social engineering, app fraud, romance scams, and money mule recruitment targeting minors.
Safety Extended
Gambling harm, coercive control, vulnerability exploitation, and radicalisation detection.
Document Analysis
Upload PDFs for per-page multi-endpoint safety detection with chain-of-custody hashing.
Multi-Endpoint
Fan-out a single text to up to 10 detectors in parallel with aggregated results.
Analysis
Emotional trend analysis — dominant emotions, sentiment trajectory, depression/anxiety indicators.
Guidance
Age-appropriate action plans for children, parents, or professionals.
Reports
Professional incident reports for schools, counselors, and moderators.
Verification
Age verification (document + biometric) and identity verification (face match + liveness).
Batch
Analyze up to 50 items in a single request with parallel processing.
Webhooks
HMAC-signed webhook alerts for critical incidents, with retry and secret rotation.
Usage
Credit balance, daily summaries, per-tool breakdowns, and billing period usage.
Compliance
GDPR data subject rights — erasure, portability, rectification, consent, audit trail.
Health
Liveness probes, readiness checks, and component-level status.