The Tuteliq Node.js SDK (@tuteliq/sdk v2.5.0) provides a typed, promise-based client for the Tuteliq child safety API. It works in Node.js 18+ and includes full TypeScript definitions out of the box.

Installation

```bash
npm install @tuteliq/sdk
```

Initialize the client

```typescript
import { Tuteliq } from '@tuteliq/sdk'

const tuteliq = new Tuteliq('YOUR_API_KEY')
```

Never hardcode API keys in source code. Use environment variables or a secrets manager instead:

```typescript
const tuteliq = new Tuteliq(process.env.TUTELIQ_API_KEY)
```

Detect unsafe content

Scan a single text input for harmful content across all KOSA categories.

```typescript
const result = await tuteliq.detectUnsafe({
  content: "Let's meet at the park after school, don't tell your parents",
  context: { ageGroup: '10-12', country: 'GB' },
})

console.log(result.unsafe)       // true
console.log(result.severity)     // "high"
console.log(result.categories)   // ["grooming", "secrecy"]
console.log(result.risk_score)   // 0.91
```
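A typical consumer routes on these fields. A minimal sketch of such routing; the `UnsafeVerdict` interface mirrors only the fields shown above, and the thresholds are this example's choice, not SDK behaviour:

```typescript
// Shape of the detectUnsafe fields used below (hypothetical, mirrors the
// fields printed in the example above).
interface UnsafeVerdict {
  unsafe: boolean
  severity: 'low' | 'medium' | 'high' | 'critical'
  categories: string[]
  risk_score: number
}

// Illustrative moderation routing; the severity cutoffs are this
// example's choice, not part of the SDK.
function routeVerdict(result: UnsafeVerdict): 'allow' | 'review' | 'block' {
  if (!result.unsafe) return 'allow'
  if (result.severity === 'high' || result.severity === 'critical') return 'block'
  return 'review'
}
```

The example content above (`unsafe: true`, `severity: "high"`) would be routed to `'block'` under this scheme.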

Detect grooming patterns

Analyze a conversation history for grooming indicators. The response includes a per-message risk breakdown showing how risk escalates across the conversation.

```typescript
const result = await tuteliq.detectGrooming({
  messages: [
    { role: 'stranger', content: 'Hey, how old are you?' },
    { role: 'child', content: "I'm 11" },
    { role: 'stranger', content: 'Cool. Do you have your own phone?' },
    { role: 'stranger', content: "Let's talk on a different app, just us" },
  ],
  childAge: 11,
})

console.log(result.grooming_risk)    // "high"
console.log(result.risk_score)       // 0.92
console.log(result.flags)            // ["isolation", "secrecy"]

// Per-message risk breakdown
if (result.message_analysis) {
  for (const msg of result.message_analysis) {
    console.log(`Message ${msg.message_index}: risk=${msg.risk_score}, flags=${msg.flags}`)
  }
}
// Message 1: risk=0.2,  flags=["information_seeking"]
// Message 2: risk=0.1,  flags=[]
// Message 3: risk=0.5,  flags=["information_seeking"]
// Message 4: risk=0.92, flags=["isolation", "secrecy_request"]
```
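The breakdown makes it easy to locate where a conversation turns risky. A sketch that finds the first message crossing a risk threshold; the `MessageRisk` interface and the default threshold are this example's assumptions, mirroring the fields printed above:

```typescript
// Per-message entry, mirroring the fields printed above (hypothetical shape).
interface MessageRisk {
  message_index: number
  risk_score: number
  flags: string[]
}

// Return the first message whose risk crosses the threshold, i.e. the point
// where the conversation turns risky. The 0.8 default is illustrative.
function firstHighRiskMessage(
  analysis: MessageRisk[],
  threshold = 0.8,
): MessageRisk | undefined {
  return analysis.find((m) => m.risk_score >= threshold)
}
```

Applied to the sample breakdown above, this returns message 4 (risk 0.92), the request to move to a different app.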

Analyze emotions

Evaluate emotional well-being from conversation text.

```typescript
const result = await tuteliq.analyzeEmotions({
  content: "Nobody at school talks to me anymore. I just sit alone every day.",
  context: { ageGroup: '13-15' },
})

console.log(result.dominant_emotions)    // ["sadness", "loneliness"]
console.log(result.emotion_scores)       // { sadness: 0.87, loneliness: 0.75, ... }
console.log(result.trend)                // "worsening"
console.log(result.recommended_followup) // "Check in about school relationships..."
```
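`dominant_emotions` is derived from `emotion_scores`, and you can apply your own cutoff client-side when the default doesn't fit. A sketch (the helper and its cutoff value are illustrative, not SDK exports):

```typescript
// Re-derive dominant emotions from raw scores with a custom cutoff,
// sorted strongest first. The 0.5 default is illustrative.
function dominantEmotions(
  scores: Record<string, number>,
  cutoff = 0.5,
): string[] {
  return Object.entries(scores)
    .filter(([, score]) => score >= cutoff)
    .sort(([, a], [, b]) => b - a)
    .map(([emotion]) => emotion)
}
```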

Analyze voice

Upload an audio file for transcription and safety analysis.

```typescript
import { readFileSync } from 'fs'

const audio = readFileSync('./recording.wav')

const result = await tuteliq.analyzeVoice({
  file: audio,
  filename: 'recording.wav',
  analysisType: 'all',
  ageGroup: '13-15',
})

console.log(result.transcription.text)    // Full transcript
console.log(result.overall_severity)      // "low" | "medium" | "high" | "critical"
console.log(result.overall_risk_score)    // 0.0 - 1.0
console.log(result.analysis?.bullying)    // Bullying analysis on transcript
```

Fraud detection and safety extended

These methods cover financial exploitation, romance scams, and coercive behaviour targeting minors. Other endpoints — detectAppFraud, detectMuleRecruitment, detectGamblingHarm, detectCoerciveControl, and detectRadicalisation — follow the same call pattern shown here.
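Because the call shape is shared, these endpoints compose well behind a generic helper. A sketch using minimal stand-ins for the exported `DetectionInput` and `DetectionResult` types (the stand-in shapes and the helper itself are this example's assumptions, not SDK exports):

```typescript
// Minimal stand-ins for the exported DetectionInput / DetectionResult types,
// covering only the fields used in the examples on this page.
interface DetectionInput {
  content: string
  context?: { ageGroup?: string; country?: string }
}

interface DetectionResult {
  detected: boolean
  risk_score: number
}

// Any client method with the shared call shape.
type DetectionMethod = (input: DetectionInput) => Promise<DetectionResult>

// Fan the same input out to several detection methods in parallel.
// Unlike analyseMulti, this issues one API call per method.
async function runDetections(
  methods: DetectionMethod[],
  input: DetectionInput,
): Promise<DetectionResult[]> {
  return Promise.all(methods.map((detect) => detect(input)))
}
```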

Detect social engineering

Identify manipulation tactics designed to trick a child into disclosing information or taking unsafe actions.

```typescript
const result = await tuteliq.detectSocialEngineering({
  content: "All my real friends share their address. Don't you trust me?",
  context: { ageGroup: '10-12' },
})

console.log(result.detected)           // true
console.log(result.categories)         // [{ tag: "TRUST_EXPLOITATION", label: "Trust Exploitation", confidence: 0.92 }]
console.log(result.risk_score)         // 0.88
console.log(result.recommended_action) // "Block and report to platform administrators"
```

Detect romance scam

Analyze text for romantic manipulation patterns that may indicate an adult posing as a peer.

```typescript
const result = await tuteliq.detectRomanceScam({
  content: "I've never felt this way about anyone before. You're so mature for your age. Keep us a secret.",
  context: { ageGroup: '13-15' },
})

console.log(result.detected)           // true
console.log(result.risk_score)         // 0.91
console.log(result.categories)         // [{ tag: "LOVE_BOMBING", label: "Love Bombing", confidence: 0.93 }]
console.log(result.recommended_action) // "Immediate intervention recommended"
```

Detect vulnerability exploitation

Detect attempts to identify and target emotional or situational vulnerabilities in a child.

```typescript
const result = await tuteliq.detectVulnerabilityExploitation({
  content: "I know you said your parents don't listen to you. I'm different — I actually care. You can tell me anything.",
  context: { ageGroup: '13-15' },
})

console.log(result.detected)           // true
console.log(result.risk_score)         // 0.85
console.log(result.categories)         // [{ tag: "EMOTIONAL_EXPLOITATION", label: "Emotional Exploitation", confidence: 0.88 }]
console.log(result.recommended_action) // "Flag for moderator review"
```

Analyse multiple endpoints in one request

Run multiple detection endpoints on a single piece of content in one API call.

```typescript
const result = await tuteliq.analyseMulti({
  content: "You're so special. Nobody else understands you like I do. Keep this a secret.",
  detections: ['social-engineering', 'romance-scam', 'grooming'],
  context: { ageGroup: '13-15' },
})

console.log(result.summary.highest_risk)        // "critical"
console.log(result.summary.total_credits_used)  // 3
console.log(result.results.length)              // 3
```

Note that analyseMulti is billed per detection endpoint, not per request.
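When you fan out across endpoints you often want a single combined verdict, which is what `summary.highest_risk` gives you. The same reduction is easy to do client-side over any set of severity labels; a sketch (the helper is illustrative, not an SDK export):

```typescript
// Severity labels ordered from least to most severe, matching the values
// shown for overall_severity and summary.highest_risk on this page.
const severityOrder = ['low', 'medium', 'high', 'critical'] as const
type Severity = (typeof severityOrder)[number]

// Reduce a list of severities (e.g. one per detection result) to the highest.
function highestSeverity(severities: Severity[]): Severity {
  return severities.reduce(
    (worst, s) => (severityOrder.indexOf(s) > severityOrder.indexOf(worst) ? s : worst),
    'low',
  )
}
```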

Error handling

The SDK throws typed errors that you can catch and inspect.

```typescript
import { Tuteliq, TuteliqError } from '@tuteliq/sdk'

try {
  const result = await tuteliq.detectUnsafe({
    content: 'some content',
    context: { ageGroup: '10-12' },
  })
} catch (error) {
  if (error instanceof TuteliqError) {
    console.error(error.code)    // e.g. "AUTH_INVALID_KEY"
    console.error(error.message) // human-readable description
    console.error(error.status)  // HTTP status code
  }
}
```
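How you respond to a `TuteliqError` typically follows from its `status` field. A sketch based on standard HTTP semantics (the helper is illustrative; the SDK already retries transient failures automatically, per the configuration options below):

```typescript
// Classify a failure by HTTP status using standard semantics: 429 (rate
// limited) and 5xx (server errors) are transient and worth retrying;
// other 4xx errors, such as an invalid API key, are not.
function isRetryable(status: number): boolean {
  return status === 429 || (status >= 500 && status < 600)
}
```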

TypeScript support

The SDK ships with complete TypeScript definitions; no additional @types package is needed. All request and response types are exported for direct use:

```typescript
import { Tuteliq } from '@tuteliq/sdk'
import type {
  DetectUnsafeInput,
  UnsafeResult,
  DetectBullyingInput,
  BullyingResult,
  DetectGroomingInput,
  GroomingResult,
  EmotionsResult,
  DetectionInput,
  DetectionResult,
} from '@tuteliq/sdk'
```

Configuration options

```typescript
const tuteliq = new Tuteliq(process.env.TUTELIQ_API_KEY, {
  timeout: 30_000,   // request timeout in ms (default: 30000)
  retries: 3,        // automatic retries on transient failures (default: 3)
  retryDelay: 1000,  // initial retry delay in ms (default: 1000)
})
```
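These options imply a worst-case added latency you may want to budget for. Assuming the delay doubles on each attempt (exponential backoff is this sketch's assumption; the options above only state the initial delay), the schedule can be computed like this:

```typescript
// Delay before retry attempt n (1-based), assuming the delay doubles each
// attempt. The doubling policy is an assumption, not documented SDK behaviour.
function retryDelayMs(initialDelay: number, attempt: number): number {
  return initialDelay * 2 ** (attempt - 1)
}

// Total worst-case time spent waiting between retries.
// With retryDelay: 1000 and retries: 3 this is 1000 + 2000 + 4000 = 7000 ms.
function totalRetryWaitMs(initialDelay: number, retries: number): number {
  let total = 0
  for (let attempt = 1; attempt <= retries; attempt++) {
    total += retryDelayMs(initialDelay, attempt)
  }
  return total
}
```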

Next steps

API Reference: explore the full API specification.

Python SDK: see the Python SDK guide.