The Tuteliq Node.js SDK (@tuteliq/sdk v2.5.0) provides a typed, promise-based client for the Tuteliq child safety API. It works in Node.js 18+ and includes full TypeScript definitions out of the box.
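The examples below assume a client instance named `tuteliq`. A plausible setup sketch follows; the exported class name and the `apiKey` option are assumptions, not confirmed SDK API, so check the package README for the real entry point:

```typescript
// Hypothetical setup — the `Tuteliq` export name and `apiKey` option
// are assumptions, not confirmed SDK API.
import { Tuteliq } from '@tuteliq/sdk'

const tuteliq = new Tuteliq({
  apiKey: process.env.TUTELIQ_API_KEY, // keep keys out of source control
})
```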
Analyze a conversation history for grooming indicators. The response includes a per-message risk breakdown showing how risk escalates across the conversation.
```ts
const result = await tuteliq.detectGrooming({
  messages: [
    { role: 'stranger', content: 'Hey, how old are you?' },
    { role: 'child', content: "I'm 11" },
    { role: 'stranger', content: 'Cool. Do you have your own phone?' },
    { role: 'stranger', content: "Let's talk on a different app, just us" },
  ],
  childAge: 11,
})

console.log(result.grooming_risk) // "high"
console.log(result.risk_score)    // 0.92
console.log(result.flags)         // ["isolation", "secrecy"]

// Per-message risk breakdown
if (result.message_analysis) {
  for (const msg of result.message_analysis) {
    console.log(`Message ${msg.message_index}: risk=${msg.risk_score}, flags=${msg.flags}`)
  }
}
// Message 1: risk=0.2, flags=["information_seeking"]
// Message 2: risk=0.1, flags=[]
// Message 3: risk=0.5, flags=["information_seeking"]
// Message 4: risk=0.92, flags=["isolation", "secrecy_request"]
```
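The per-message breakdown makes it easy to locate where a conversation turns risky. A small illustrative helper, assuming entries with the `message_index`, `risk_score`, and `flags` fields shown above (the threshold value is an assumption, not SDK behaviour):

```typescript
interface MessageRisk {
  message_index: number
  risk_score: number
  flags: string[]
}

// Return the index of the first message whose risk meets the threshold,
// or null if the conversation never crosses it.
function firstEscalation(analysis: MessageRisk[], threshold = 0.8): number | null {
  for (const msg of analysis) {
    if (msg.risk_score >= threshold) return msg.message_index
  }
  return null
}

// Using the per-message breakdown from the example above:
const breakdown: MessageRisk[] = [
  { message_index: 1, risk_score: 0.2, flags: ['information_seeking'] },
  { message_index: 2, risk_score: 0.1, flags: [] },
  { message_index: 3, risk_score: 0.5, flags: ['information_seeking'] },
  { message_index: 4, ris_score: 0.92, flags: ['isolation', 'secrecy_request'] } as unknown as MessageRisk,
]
breakdown[3] = { message_index: 4, risk_score: 0.92, flags: ['isolation', 'secrecy_request'] }

console.log(firstEscalation(breakdown)) // 4
```

A stricter or looser threshold can be passed per call, e.g. `firstEscalation(breakdown, 0.95)` returns `null` for this conversation.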
Evaluate emotional well-being from conversation text.
```ts
const result = await tuteliq.analyzeEmotions({
  content: "Nobody at school talks to me anymore. I just sit alone every day.",
  context: { ageGroup: '13-15' },
})

console.log(result.dominant_emotions)    // ["sadness", "loneliness"]
console.log(result.emotion_scores)       // { sadness: 0.87, loneliness: 0.75, ... }
console.log(result.trend)                // "worsening"
console.log(result.recommended_followup) // "Check in about school relationships..."
```
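The `emotion_scores` map can be post-processed on your side, for example to apply your own cut-off when deciding which emotions to surface. The SDK already returns `dominant_emotions`, so this is purely illustrative, and the 0.5 cut-off is an assumption:

```typescript
// Pick emotions scoring at or above `cutoff`, strongest first.
function dominantEmotions(scores: Record<string, number>, cutoff = 0.5): string[] {
  return Object.entries(scores)
    .filter(([, score]) => score >= cutoff)
    .sort(([, a], [, b]) => b - a)
    .map(([emotion]) => emotion)
}

console.log(dominantEmotions({ sadness: 0.87, loneliness: 0.75, anger: 0.12 }))
// ["sadness", "loneliness"]
```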
The methods in this section cover financial exploitation, romance scams, and coercive behaviour targeting minors. Additional endpoints — detectAppFraud, detectMuleRecruitment, detectGamblingHarm, detectCoerciveControl, and detectRadicalisation — follow the same call pattern shown below.
Analyze text for romantic manipulation patterns that may indicate an adult posing as a peer.
```ts
const result = await tuteliq.detectRomanceScam({
  content: "I've never felt this way about anyone before. You're so mature for your age. Keep us a secret.",
  context: { ageGroup: '13-15' },
})

console.log(result.detected)           // true
console.log(result.risk_score)         // 0.91
console.log(result.categories)         // [{ tag: "LOVE_BOMBING", label: "Love Bombing", confidence: 0.93 }]
console.log(result.recommended_action) // "Immediate intervention recommended"
```
Detect attempts to identify and target emotional or situational vulnerabilities in a child.
```ts
const result = await tuteliq.detectVulnerabilityExploitation({
  content: "I know you said your parents don't listen to you. I'm different — I actually care. You can tell me anything.",
  context: { ageGroup: '13-15' },
})

console.log(result.detected)           // true
console.log(result.risk_score)         // 0.85
console.log(result.categories)         // [{ tag: "EMOTIONAL_EXPLOITATION", label: "Emotional Exploitation", confidence: 0.88 }]
console.log(result.recommended_action) // "Flag for moderator review"
```
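Because these single-content endpoints return a common shape (`detected`, `risk_score`, `categories`, `recommended_action`), one triage routine can service all of them. A sketch; the 0.9 and 0.6 cut-offs are illustrative assumptions, not SDK behaviour:

```typescript
interface Category {
  tag: string
  label: string
  confidence: number
}

// Common result shape shared by the detection endpoints above.
interface DetectionResult {
  detected: boolean
  risk_score: number
  categories: Category[]
  recommended_action: string
}

type Triage = 'none' | 'review' | 'intervene'

// Map any detection result onto a moderation decision.
// The threshold values here are illustrative, not part of the API.
function triage(result: DetectionResult): Triage {
  if (!result.detected) return 'none'
  if (result.risk_score >= 0.9) return 'intervene'
  if (result.risk_score >= 0.6) return 'review'
  return 'none'
}
```

With these thresholds, the romance-scam example above (risk 0.91) would triage to `'intervene'` and the vulnerability-exploitation example (risk 0.85) to `'review'`.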
Run multiple detection endpoints on a single piece of content in one API call.
```ts
const result = await tuteliq.analyseMulti({
  content: "You're so special. Nobody else understands you like I do. Keep this a secret.",
  detections: ['social-engineering', 'romance-scam', 'grooming'],
  context: { ageGroup: '13-15' },
})

console.log(result.summary.highest_risk)       // "critical"
console.log(result.summary.total_credits_used) // 3
console.log(result.results.length)             // 3
```
Note that analyseMulti is billed per detection endpoint, not per request: requesting three detections consumes three credits even though only one API call is made.
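Given per-detection billing, the credit cost of an analyseMulti call can be estimated before sending it. This assumes one credit per requested detection, consistent with the example above (three detections, `total_credits_used` of 3):

```typescript
// Estimate the credit cost of an analyseMulti call before sending it.
// Assumes one credit per requested detection, as in the example above.
function estimateCredits(detections: string[]): number {
  return detections.length
}

console.log(estimateCredits(['social-engineering', 'romance-scam', 'grooming'])) // 3
```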