The Tuteliq API provides AI-powered child safety detection across text, voice, images, and video. This page covers everything you need to know before making your first call.

Base URL

https://api.tuteliq.ai/api/v1
All endpoints are prefixed with /api/v1. For example, the bullying detection endpoint is at:
POST https://api.tuteliq.ai/api/v1/safety/bullying

Authentication

Include your API key in every request as a Bearer token in the Authorization header:
curl -X POST https://api.tuteliq.ai/api/v1/safety/bullying \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"text": "message to analyze"}'
Get your API key from the Tuteliq Dashboard.
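The curl call above can be mirrored in code. This sketch builds the same authenticated POST with Python's standard library; `build_request` is a hypothetical helper name, not part of any Tuteliq SDK:

```python
import json
from urllib import request

API_BASE = "https://api.tuteliq.ai/api/v1"

def build_request(path: str, payload: dict, api_key: str) -> request.Request:
    """Construct an authenticated POST request for a Tuteliq endpoint."""
    return request.Request(
        url=f"{API_BASE}{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("/safety/bullying", {"text": "message to analyze"}, "YOUR_API_KEY")
# Send with urllib.request.urlopen(req), or any HTTP client of your choice.
```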

Response Format

Successful response

All detection endpoints return a consistent response shape:
{
  "endpoint": "bullying",
  "detected": true,
  "severity": 0.78,
  "confidence": 0.92,
  "risk_score": 0.85,
  "level": "high",
  "categories": [
    { "tag": "DIRECT_INSULT", "label": "Direct Insult", "confidence": 0.95 },
    { "tag": "EXCLUSION", "label": "Social Exclusion", "confidence": 0.72 }
  ],
  "evidence": [
    { "text": "nobody wants you here", "tactic": "EXCLUSION", "weight": 0.88 }
  ],
  "age_calibration": {
    "applied": true,
    "age_group": "10-12",
    "multiplier": 1.3
  },
  "recommended_action": "flag_for_review",
  "rationale": "Direct insults combined with social exclusion targeting a 10-12 year old.",
  "credits_used": 1,
  "processing_time_ms": 387
}
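Given that consistent shape, a client can filter the `categories` array down to high-confidence tags before acting on a result. A minimal sketch (`flagged_tags` is an illustrative helper, not part of the API):

```python
def flagged_tags(result: dict, min_confidence: float = 0.8) -> list[str]:
    """Return category tags from a detection response that meet the confidence threshold."""
    if not result.get("detected"):
        return []
    return [
        c["tag"]
        for c in result.get("categories", [])
        if c["confidence"] >= min_confidence
    ]

response = {
    "detected": True,
    "categories": [
        {"tag": "DIRECT_INSULT", "label": "Direct Insult", "confidence": 0.95},
        {"tag": "EXCLUSION", "label": "Social Exclusion", "confidence": 0.72},
    ],
}
tags = flagged_tags(response)  # only DIRECT_INSULT clears the 0.8 default
```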

Error response

{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "You have exceeded your rate limit of 300 requests per minute.",
    "request_id": "req_abc123",
    "suggestion": "Upgrade to Pro for 1,000 requests per minute.",
    "links": {
      "upgrade": "https://tuteliq.ai/dashboard/billing",
      "docs": "https://docs.tuteliq.ai/error-handling"
    }
  }
}

Context Fields

Pass a context object with any detection request to improve accuracy:
{
  "text": "message to analyze",
  "context": {
    "age_group": "13-15",
    "language": "en",
    "country": "GB",
    "platform": "Discord",
    "conversation_history": [
      { "role": "user", "text": "previous message" },
      { "role": "contact", "text": "response message" }
    ]
  }
}
| Field | Type | Description |
| --- | --- | --- |
| age_group | string | "under 10", "10-12", "13-15", "16-17", or "under 18". Triggers age-calibrated severity scoring. |
| language | string | ISO 639-1 code (e.g., "en", "de", "sv"). Auto-detected if omitted. 27 languages supported. |
| platform | string | Platform name (e.g., "Discord", "Roblox", "WhatsApp"). Adjusts for platform-specific norms. |
| conversation_history | array | Prior messages for context-aware analysis. Each message needs role and text. |
| sender_trust | string | "verified", "trusted", or "unknown". Verified senders suppress authority impersonation false positives. |
| sender_name | string | Sender identifier, used alongside sender_trust for impersonation scoring. |
| country | string | ISO 3166-1 alpha-2 country code (e.g., "GB", "US", "SE"). Enables geo-localised crisis helpline data. Falls back to user profile country if omitted. |

Options

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| include_evidence | boolean | true | Include evidence excerpts with flagged phrases and weights. |
| support_threshold | string | "high" | Minimum severity to include crisis helplines. Values: low, medium, high, critical. Critical severity always includes support resources. |

Stateless by Design

Tuteliq is fully stateless: every API call is independent, and no conversation text, context, or session state is retained between requests. This is a deliberate privacy-by-design decision, not a missing feature.

Why stateless?
  • GDPR compliance — Processing children’s data under GDPR/COPPA demands the strictest data minimization. By retaining no cross-request state, there is zero risk of sensitive conversation data persisting in caches, logs, or backups.
  • No data retention surface — There is no session store to breach, no conversation cache to leak, and no accumulated history to subpoena. Each request arrives, is analyzed, and the content is discarded.
  • Simpler compliance audits — “We store nothing between requests” is the easiest privacy posture to audit and certify.
What this means for developers:
  • Pass context (age_group, platform, language, conversation_history) with every request that needs it.
  • Use external_id and customer_id to correlate results with your own systems — these are echoed back but not stored.
  • If you need to track risk escalation across a conversation, aggregate results on your side using the severity, risk_score, and level fields returned by each call.
This is intentional. Many child safety APIs offer session-based context accumulation. We chose not to — because when you’re processing messages from minors, the safest data is data you never store. Your integration handles context; Tuteliq handles detection.
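Because the API keeps no state, escalation tracking lives on your side. This sketch aggregates the `risk_score` and `level` fields across a conversation; `ConversationRisk` is an illustrative client-side structure, not part of any Tuteliq SDK:

```python
from dataclasses import dataclass, field

# Severity levels in ascending order of concern.
LEVEL_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

@dataclass
class ConversationRisk:
    """Client-side risk tracker; Tuteliq itself stores nothing between calls."""
    scores: list = field(default_factory=list)
    peak_level: str = "low"

    def record(self, result: dict) -> None:
        """Fold one detection response into the running aggregate."""
        self.scores.append(result.get("risk_score", 0.0))
        level = result.get("level", "low")
        if LEVEL_RANK.get(level, 0) > LEVEL_RANK[self.peak_level]:
            self.peak_level = level

    @property
    def trend_rising(self) -> bool:
        """True when risk has climbed since the first analyzed message."""
        return len(self.scores) >= 2 and self.scores[-1] > self.scores[0]

risk = ConversationRisk()
risk.record({"risk_score": 0.2, "level": "low"})
risk.record({"risk_score": 0.85, "level": "high"})
```

Persist a structure like this keyed by your own `external_id`, since the API echoes that field back without storing it.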

Sandbox Mode

API keys with environment: "sandbox" run real analysis without consuming credits. Sandbox responses include "sandbox": true and the X-Sandbox-Mode: true header. Create a sandbox key in your Dashboard under Settings > API Keys > Environment: Sandbox.

Sandbox limits:
  • 10 requests per minute rate limit
  • 50 calls per day (resets at midnight UTC)
  • Real analysis, real results — not mocked
Sandbox mode is for integration testing only. Daily limits prevent use as a free production workaround.
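Since sandbox responses are flagged in both the body and a header, an integration test can assert it never hits production by accident. A minimal check (`is_sandbox` is an illustrative helper):

```python
def is_sandbox(result: dict, headers: dict) -> bool:
    """True if a response came from a sandbox key, per body flag or header."""
    return result.get("sandbox") is True or headers.get("X-Sandbox-Mode") == "true"
```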

Rate Limits

Rate limits are enforced per API key per minute, based on your plan:
| Tier | Requests/min | Monthly calls | Credits/mo |
| --- | --- | --- | --- |
| Starter (Free) | 60 | 1,000 | 1,000 |
| Indie ($29/mo) | 300 | 10,000 | 10,000 |
| Pro ($99/mo) | 1,000 | 50,000 | 50,000 |
| Business ($349/mo) | 5,000 | 200,000 | 200,000 |
| Enterprise | 10,000 | Custom | Custom |
| Sandbox | 10 | 50/day | No credits consumed |
Every response includes rate limit headers:
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 994
X-RateLimit-Reset: 1710000060
When exceeded, the API returns 429 Too Many Requests with a Retry-After header.
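A well-behaved client honors that Retry-After header before resending. This sketch retries only on 429, falling back to exponential backoff when the header is absent; the `send` callable and its `(status, headers, body)` return shape are assumptions for illustration:

```python
import time

def call_with_retry(send, max_attempts: int = 3):
    """Invoke send() until it stops returning 429 or attempts run out.

    send is any zero-argument callable returning (status, headers, body).
    """
    for attempt in range(max_attempts):
        status, headers, body = send()
        if status != 429:
            return status, body
        # Honor the server's Retry-After; otherwise back off exponentially.
        delay = float(headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    return status, body

# Simulated transport: one 429 with an immediate retry window, then success.
queue = [
    (429, {"Retry-After": "0"}, {"error": {"code": "RATE_LIMIT_EXCEEDED"}}),
    (200, {}, {"detected": False}),
]
status, body = call_with_retry(lambda: queue.pop(0))
```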

Credits

Each endpoint consumes a different number of credits:
| Endpoint | Credits | Notes |
| --- | --- | --- |
| Text detection (bullying, grooming, unsafe, etc.) | 1 | Per call |
| Grooming / Emotions (with history) | 1 per 10 messages | ceil(messages / 10), minimum 1 |
| Action plan | 2 | Longer generation |
| Incident report | 3 | Structured output |
| Image analysis | 3 | Vision + OCR + safety |
| Voice analysis | 5 | Transcription + safety |
| Age verification | 5 | Document/biometric |
| Document analysis | Dynamic | max(3, pages × endpoints) |
| Video analysis | 10 | Frame extraction + per-frame analysis |
| Identity verification | 10 | Document auth + face match + liveness |
Every response includes credits_used and credit balance headers:
X-Credits-Remaining: 49234
X-Monthly-Used: 766
X-Monthly-Limit: 50000
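The two dynamic rows in the table above can be estimated ahead of a call. These helpers implement the stated formulas directly (the function names are illustrative):

```python
import math

def history_credits(message_count: int) -> int:
    """Grooming/emotions with history: ceil(messages / 10), minimum 1."""
    return max(1, math.ceil(message_count / 10))

def document_credits(pages: int, endpoints: int) -> int:
    """Document analysis: dynamic, max(3, pages x endpoints)."""
    return max(3, pages * endpoints)
```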

Common Error Codes

| Code | HTTP | Description |
| --- | --- | --- |
| AUTH_REQUIRED | 401 | No API key provided |
| AUTH_INVALID_KEY | 401 | API key is invalid or unrecognized |
| AUTH_EXPIRED_KEY | 401 | API key has expired |
| AUTH_INACTIVE_KEY | 401 | API key is inactive |
| RATE_LIMIT_EXCEEDED | 429 | Rate limit exceeded for your tier |
| MESSAGE_LIMIT_REACHED | 429 | Monthly credit limit reached |
| TIER_ACCESS_DENIED | 403 | Endpoint not available on your tier |
| VAL_INVALID_INPUT | 400 | Request body or parameters failed validation |
| SVC_INTERNAL_ERROR | 500 | Unexpected internal error (safe to retry) |
For the full error reference and retry strategies, see Error Handling.
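Only some of these codes are worth retrying: per the table, per-minute rate limits and internal errors are transient, while auth failures, validation errors, and an exhausted monthly quota are not. A sketch of that classification (the retryable set is inferred from the descriptions above, so confirm it against the full Error Handling reference):

```python
# Transient failures worth retrying with backoff; everything else is terminal
# until the caller fixes the request, key, or plan.
RETRYABLE = {"RATE_LIMIT_EXCEEDED", "SVC_INTERNAL_ERROR"}

def should_retry(error_body: dict) -> bool:
    """Decide whether an error response envelope warrants a retry."""
    code = error_body.get("error", {}).get("code", "")
    return code in RETRYABLE
```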

Endpoint Groups

Endpoint pages are auto-generated from the OpenAPI specification and appear in the sidebar. Each page includes request/response schemas, parameter descriptions, and an interactive playground.

Safety

Bullying, grooming, unsafe content, voice, image, and video analysis. Covers all 9 KOSA harm categories.

Fraud Detection

Social engineering, app fraud, romance scams, and money mule recruitment targeting minors.

Safety Extended

Gambling harm, coercive control, vulnerability exploitation, and radicalisation detection.

Document Analysis

Upload PDFs for per-page multi-endpoint safety detection with chain-of-custody hashing.

Multi-Endpoint

Fan-out a single text to up to 10 detectors in parallel with aggregated results.

Analysis

Emotional trend analysis — dominant emotions, sentiment trajectory, depression/anxiety indicators.

Guidance

Age-appropriate action plans for children, parents, or professionals.

Reports

Professional incident reports for schools, counselors, and moderators.

Verification

Age verification (document + biometric) and identity verification (face match + liveness).

Batch

Analyze up to 50 items in a single request with parallel processing.

Webhooks

HMAC-signed webhook alerts for critical incidents, with retry and secret rotation.

Usage

Credit balance, daily summaries, per-tool breakdowns, and billing period usage.

Compliance

GDPR data subject rights — erasure, portability, rectification, consent, audit trail.

Health

Liveness probes, readiness checks, and component-level status.