The Tuteliq React Native SDK provides a typed client for the Tuteliq child safety API in React Native applications. It supports React Native 0.70+ with full TypeScript definitions.

Installation

npm install @tuteliq/react-native
Or with Yarn:
yarn add @tuteliq/react-native

Initialize the client

import { Tuteliq } from '@tuteliq/react-native';

const tuteliq = new Tuteliq('YOUR_API_KEY');
Never hardcode API keys in source code. Use react-native-config, environment variables, or a secrets manager.
import Config from 'react-native-config';

const tuteliq = new Tuteliq(Config.TUTELIQ_API_KEY);

Detect unsafe content

Scan a single text input for harmful content across all KOSA categories.
const result = await tuteliq.detectUnsafe({
  content: "Let's meet at the park after school, don't tell your parents",
  context: { ageGroup: '10-12' },
});

console.log(result.unsafe);       // true
console.log(result.severity);     // "high"
console.log(result.categories);   // ["grooming", "secrecy"]
console.log(result.risk_score);   // 0.91
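One way to act on a result like this client-side. This helper is illustrative only: the `allow`/`warn`/`block` actions and the 0.5 threshold are assumptions, not part of the SDK.

```typescript
// Map a detectUnsafe result to a client-side action.
// Severity values mirror the examples in this guide; thresholds are illustrative.
type Severity = 'low' | 'medium' | 'high' | 'critical';

interface UnsafeResult {
  unsafe: boolean;
  severity: Severity;
  risk_score: number;
}

function actionFor(result: UnsafeResult): 'allow' | 'warn' | 'block' {
  if (!result.unsafe) return 'allow';
  // Block outright on high or critical severity.
  if (result.severity === 'high' || result.severity === 'critical') return 'block';
  // For lower severities, fall back to the numeric risk score.
  return result.risk_score >= 0.5 ? 'warn' : 'allow';
}
```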

Detect grooming patterns

Analyze a conversation history for grooming indicators.
const result = await tuteliq.detectGrooming({
  messages: [
    { role: 'stranger', content: 'Hey, how old are you?' },
    { role: 'child', content: "I'm 11" },
    { role: 'stranger', content: 'Cool. Do you have your own phone?' },
    { role: 'stranger', content: "Let's talk on a different app, just us" },
  ],
  childAge: 11,
});

console.log(result.grooming_risk);  // "high"
console.log(result.risk_score);     // 0.92
console.log(result.flags);          // ["isolation", "secrecy"]
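An escalation rule could combine the risk level with the returned flags. The rule below is a sketch of one possible policy, not SDK behavior.

```typescript
// Escalate when grooming risk is high/critical, or when multiple
// independent indicators (e.g. "isolation" plus "secrecy") co-occur.
function shouldEscalate(
  risk: 'low' | 'medium' | 'high' | 'critical',
  flags: string[],
): boolean {
  return risk === 'high' || risk === 'critical' || flags.length >= 2;
}
```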

Analyze emotions

Evaluate emotional well-being from conversation text.
const result = await tuteliq.analyzeEmotions({
  content: 'Nobody at school talks to me anymore. I just sit alone every day.',
  context: { ageGroup: '13-15' },
});

console.log(result.dominant_emotions);    // ["sadness", "loneliness"]
console.log(result.emotion_scores);       // { sadness: 0.87, loneliness: 0.75, ... }
console.log(result.trend);               // "worsening"
console.log(result.recommended_followup); // "Check in about school relationships..."
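Since `emotion_scores` is a map of emotion to score, you can derive your own shortlist from it. The 0.7 cutoff below is an assumption for this sketch.

```typescript
// Return the emotions whose score meets the threshold, strongest first.
function topEmotions(scores: Record<string, number>, threshold = 0.7): string[] {
  return Object.entries(scores)
    .filter(([, score]) => score >= threshold)
    .sort(([, a], [, b]) => b - a)
    .map(([emotion]) => emotion);
}
```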

Analyze voice

Upload an audio file for transcription and safety analysis.
import RNFS from 'react-native-fs';

const audioPath = `${RNFS.DocumentDirectoryPath}/recording.wav`;

const result = await tuteliq.analyzeVoice({
  filePath: audioPath,
  filename: 'recording.wav',
  analysisType: 'all',
  ageGroup: '13-15',
});

console.log(result.transcription.text);   // Full transcript
console.log(result.overall_severity);     // "low" | "medium" | "high" | "critical"
console.log(result.overall_risk_score);   // 0.0 - 1.0

Hook integration

The SDK exports a useTuteliq hook for convenient usage within React components.
import React, { useState } from 'react';
import { View, Text } from 'react-native';
import { useTuteliq } from '@tuteliq/react-native';

function SafetyScreen() {
  const tuteliq = useTuteliq();
  const [result, setResult] = useState(null);

  const checkMessage = async (text: string) => {
    const res = await tuteliq.detectUnsafe({
      content: text,
      context: { ageGroup: '13-15' },
    });
    setResult(res);
  };

  return (
    <View>
      <Text>{result === null ? 'Checking...' : result.unsafe ? 'Content flagged' : 'Content is safe'}</Text>
    </View>
  );
}
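If you prefer to keep that status logic out of JSX, it can live in a small pure function. The wording and the result shape here are illustrative.

```typescript
// Choose the status line from the latest detection result.
// null means no check has completed yet.
function statusText(result: { unsafe: boolean } | null): string {
  if (result === null) return 'Checking...';
  return result.unsafe ? 'Content flagged' : 'Content is safe';
}
```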

Extended fraud and safety detection

These methods cover financial exploitation, romance scams, and coercive behavior targeting minors. The remaining endpoints — detectAppFraud, detectMuleRecruitment, detectGamblingHarm, detectCoerciveControl, and detectRadicalisation — follow the same call pattern shown here.

Detect social engineering

Identify manipulation tactics designed to trick a child into disclosing information or taking unsafe actions.
const result = await tuteliq.detectSocialEngineering({
  content: "If you really trusted me you'd send me your home address. All my real friends do.",
  context: { ageGroup: '10-12' },
});

console.log(result.detected);           // true
console.log(result.categories);         // [{ tag: "TRUST_EXPLOITATION", label: "Trust Exploitation", confidence: 0.92 }]
console.log(result.risk_score);         // 0.88
console.log(result.recommended_action); // "Block and report to platform administrators"
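When a result lists several categories, you may only want the strongest one for display. This helper assumes the `{ tag, confidence }` shape shown in the example output above.

```typescript
// Pick the highest-confidence category tag, or null if none were detected.
function topCategory(
  categories: { tag: string; confidence: number }[],
): string | null {
  if (categories.length === 0) return null;
  return categories.reduce((best, c) => (c.confidence > best.confidence ? c : best)).tag;
}
```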

Detect romance scam

Analyze text for romantic manipulation patterns that may indicate an adult posing as a peer.
const result = await tuteliq.detectRomanceScam({
  content: "I've never felt this way about anyone before. You're so mature for your age. Keep us a secret.",
  context: { ageGroup: '13-15' },
});

console.log(result.detected);           // true
console.log(result.risk_score);         // 0.91
console.log(result.categories);         // [{ tag: "LOVE_BOMBING", label: "Love Bombing", confidence: 0.93 }]
console.log(result.recommended_action); // "Immediate intervention recommended"

Detect vulnerability exploitation

Detect attempts to identify and target emotional or situational vulnerabilities in a child.
const result = await tuteliq.detectVulnerabilityExploitation({
  content: "I know you said your parents don't listen to you. I'm different — I actually care. You can tell me anything.",
  context: { ageGroup: '13-15' },
});

console.log(result.detected);           // true
console.log(result.risk_score);         // 0.85
console.log(result.categories);         // [{ tag: "EMOTIONAL_EXPLOITATION", label: "Emotional Exploitation", confidence: 0.88 }]
console.log(result.recommended_action); // "Flag for moderator review"

Analyse multiple endpoints in one request

Run multiple detection endpoints on a single piece of content in one API call.
const result = await tuteliq.analyseMulti({
  content: "You're so special. Nobody else understands you like I do. Keep this a secret.",
  detections: ['social-engineering', 'romance-scam', 'grooming'],
  context: { ageGroup: '13-15' },
});

console.log(result.summary.highest_risk);       // "critical"
console.log(result.summary.total_credits_used);  // 3
console.log(result.results.length);              // 3
analyseMulti is billed per detection endpoint, not per request.
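If you need to recompute the worst outcome from the per-endpoint results yourself, a fixed severity ordering works. The `{ detection, severity }` shape is an assumption for this sketch; the SDK already surfaces `summary.highest_risk`.

```typescript
// Rank severities and reduce the result list to the worst one seen.
const SEVERITY_ORDER = ['low', 'medium', 'high', 'critical'] as const;
type Sev = (typeof SEVERITY_ORDER)[number];

function highestRisk(results: { detection: string; severity: Sev }[]): Sev {
  return results.reduce<Sev>(
    (worst, r) =>
      SEVERITY_ORDER.indexOf(r.severity) > SEVERITY_ORDER.indexOf(worst)
        ? r.severity
        : worst,
    'low',
  );
}
```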

Error handling

The SDK throws typed errors that you can catch and inspect.
import { Tuteliq, TuteliqError } from '@tuteliq/react-native';

try {
  const result = await tuteliq.detectUnsafe({
    content: 'some content',
    context: { ageGroup: '10-12' },
  });
} catch (error) {
  if (error instanceof TuteliqError) {
    console.error(error.code);    // e.g. "AUTH_INVALID_KEY"
    console.error(error.message); // human-readable description
    console.error(error.status);  // HTTP status code
  }
}
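Beyond inspecting the error, you may want to decide whether a failed call is worth retrying at all. The classification below is a common convention, not something the SDK prescribes.

```typescript
// Retry rate limiting and server errors; client errors (4xx other than 429)
// will not succeed on retry, so surface them immediately.
function isRetryable(status: number): boolean {
  return status === 429 || status >= 500;
}
```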

Configuration options

const tuteliq = new Tuteliq(Config.TUTELIQ_API_KEY, {
  timeout: 30_000,   // request timeout in ms (default: 30000)
  retries: 3,        // automatic retries on transient failures (default: 3)
  retryDelay: 1000,  // initial retry delay in ms (default: 1000)
});
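To see how `retries` and `retryDelay` might interact, here is one way to compute a delay schedule. The doubling policy is an assumption for illustration; the SDK's actual backoff strategy may differ.

```typescript
// Derive an exponential backoff schedule: initialMs, then doubling per attempt.
function backoffDelays(retries: number, initialMs: number): number[] {
  return Array.from({ length: retries }, (_, attempt) => initialMs * 2 ** attempt);
}
```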

Next steps

API Reference

Explore the full API specification.

Node.js SDK

See the Node.js SDK guide.