The Tuteliq React Native SDK provides a typed client for the Tuteliq child safety API in React Native applications. It supports React Native 0.70+ with full TypeScript definitions.
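All of the examples below call methods on a configured `tuteliq` client. A minimal setup sketch follows, using a local stand-in class so the snippet is self-contained — the real import path, class name, and constructor options are assumptions, not confirmed by this document:

```typescript
// Stand-in for the SDK's client class; in a real app you would import it
// from the Tuteliq package instead. The constructor shape is an assumption.
class Tuteliq {
  constructor(readonly options: { apiKey: string }) {}
}

// One client instance, typically created once at app startup.
const tuteliq = new Tuteliq({ apiKey: 'YOUR_API_KEY' });
console.log(tuteliq.options.apiKey.length > 0); // true
```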
Analyze a conversation history for grooming indicators.
```typescript
const result = await tuteliq.detectGrooming({
  messages: [
    { role: 'stranger', content: 'Hey, how old are you?' },
    { role: 'child', content: "I'm 11" },
    { role: 'stranger', content: 'Cool. Do you have your own phone?' },
    { role: 'stranger', content: "Let's talk on a different app, just us" },
  ],
  childAge: 11,
});

console.log(result.grooming_risk); // "high"
console.log(result.risk_score); // 0.92
console.log(result.flags); // ["isolation", "secrecy"]
```
Evaluate emotional well-being from conversation text.
```typescript
const result = await tuteliq.analyzeEmotions({
  content: 'Nobody at school talks to me anymore. I just sit alone every day.',
  context: { ageGroup: '13-15' },
});

console.log(result.dominant_emotions); // ["sadness", "loneliness"]
console.log(result.emotion_scores); // { sadness: 0.87, loneliness: 0.75, ... }
console.log(result.trend); // "worsening"
console.log(result.recommended_followup); // "Check in about school relationships..."
```
These methods cover financial exploitation, romance scams, and coercive behaviour targeting minors. Other endpoints — detectAppFraud, detectMuleRecruitment, detectGamblingHarm, detectCoerciveControl, and detectRadicalisation — follow the same call pattern shown here.
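Because these endpoints share one request/response shape, switching between them only changes the method name. The sketch below illustrates that shape for one of the endpoints not documented individually here; the types are inferred from the examples in this document, and the stubbed function stands in for the real client method (which performs an authenticated HTTP request):

```typescript
// Illustrative types inferred from the examples in this document;
// the SDK's actual exported types may differ.
interface DetectionContext { ageGroup: string }
interface DetectionCategory { tag: string; label: string; confidence: number }
interface DetectionResult {
  detected: boolean;
  risk_score: number;
  categories: DetectionCategory[];
  recommended_action: string;
}

// Stub standing in for any detect* endpoint so the sketch is runnable;
// the real method calls the Tuteliq API and returns its response.
async function detectCoerciveControl(
  input: { content: string; context?: DetectionContext },
): Promise<DetectionResult> {
  return {
    detected: true,
    risk_score: 0.8,
    categories: [{ tag: 'ILLUSTRATIVE', label: 'Illustrative', confidence: 0.8 }],
    recommended_action: 'Flag for moderator review',
  };
}

detectCoerciveControl({
  content: 'You have to do what I say or I will tell everyone.',
  context: { ageGroup: '13-15' },
}).then((result) => {
  console.log(result.detected); // true
});
```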
Identify manipulation tactics designed to trick a child into disclosing information or taking unsafe actions.
```typescript
const result = await tuteliq.detectSocialEngineering({
  content: "If you really trusted me you'd send me your home address. All my real friends do.",
  context: { ageGroup: '10-12' },
});

console.log(result.detected); // true
console.log(result.categories); // [{ tag: "TRUST_EXPLOITATION", label: "Trust Exploitation", confidence: 0.92 }]
console.log(result.risk_score); // 0.88
console.log(result.recommended_action); // "Block and report to platform administrators"
```
Analyze text for romantic manipulation patterns that may indicate an adult posing as a peer.
```typescript
const result = await tuteliq.detectRomanceScam({
  content: "I've never felt this way about anyone before. You're so mature for your age. Keep us a secret.",
  context: { ageGroup: '13-15' },
});

console.log(result.detected); // true
console.log(result.risk_score); // 0.91
console.log(result.categories); // [{ tag: "LOVE_BOMBING", label: "Love Bombing", confidence: 0.93 }]
console.log(result.recommended_action); // "Immediate intervention recommended"
```
Detect attempts to identify and target emotional or situational vulnerabilities in a child.
```typescript
const result = await tuteliq.detectVulnerabilityExploitation({
  content: "I know you said your parents don't listen to you. I'm different — I actually care. You can tell me anything.",
  context: { ageGroup: '13-15' },
});

console.log(result.detected); // true
console.log(result.risk_score); // 0.85
console.log(result.categories); // [{ tag: "EMOTIONAL_EXPLOITATION", label: "Emotional Exploitation", confidence: 0.88 }]
console.log(result.recommended_action); // "Flag for moderator review"
```
Run multiple detection endpoints on a single piece of content in one API call.
```typescript
const result = await tuteliq.analyseMulti({
  content: "You're so special. Nobody else understands you like I do. Keep this a secret.",
  detections: ['social-engineering', 'romance-scam', 'grooming'],
  context: { ageGroup: '13-15' },
});

console.log(result.summary.highest_risk); // "critical"
console.log(result.summary.total_credits_used); // 3
console.log(result.results.length); // 3
```
analyseMulti is billed per detection endpoint, not per request.
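Assuming each requested detection consumes one credit (consistent with the `total_credits_used` value of 3 for three detections in the example above — the actual billing schedule may weight endpoints differently), the cost of a call can be estimated client-side before sending it:

```typescript
// Sketch: estimate the credit cost of an analyseMulti call, assuming one
// credit per requested detection endpoint. This is an inference from the
// example output, not a documented pricing guarantee.
function estimateMultiCredits(detections: string[]): number {
  return detections.length;
}

console.log(estimateMultiCredits(['social-engineering', 'romance-scam', 'grooming'])); // 3
```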