Scan a single text input for harmful content across all KOSA categories.
```kotlin
val result = tuteliq.detectUnsafe(
    text = "Let's meet at the park after school, don't tell your parents",
    ageGroup = AgeGroup.TEN_TO_TWELVE
)

println(result.safe)       // false
println(result.severity)   // Severity.HIGH
println(result.categories) // [Category.GROOMING, Category.SECRECY]
```
Analyze a conversation history for grooming indicators.
```kotlin
val result = tuteliq.detectGrooming(
    messages = listOf(
        Message(role = Role.STRANGER, text = "Hey, how old are you?"),
        Message(role = Role.CHILD, text = "I'm 11"),
        Message(role = Role.STRANGER, text = "Cool. Do you have your own phone?"),
        Message(role = Role.STRANGER, text = "Let's talk on a different app, just us"),
    ),
    ageGroup = AgeGroup.TEN_TO_TWELVE
)

println(result.groomingDetected) // true
println(result.riskScore)        // 0.92
println(result.stage)            // GroomingStage.ISOLATION
```
Evaluate emotional well-being from conversation text.
```kotlin
val result = tuteliq.analyzeEmotions(
    text = "Nobody at school talks to me anymore. I just sit alone every day.",
    ageGroup = AgeGroup.THIRTEEN_TO_FIFTEEN
)

println(result.emotions)  // [Emotion(label="sadness", score=0.87), ...]
println(result.distress)  // true
println(result.riskLevel) // RiskLevel.ELEVATED
```
These methods cover financial exploitation, romance scams, and coercive behaviour targeting minors. Other endpoints — detectAppFraud, detectMuleRecruitment, detectGamblingHarm, detectCoerciveControl, and detectRadicalisation — follow the same call pattern shown here.
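As a sketch of that shared call shape, here is what a detectCoerciveControl call would look like. The stub client, result fields, and canned return values below are stand-ins for illustration only, not the SDK's real types — the point is that each sibling endpoint takes the same text/ageGroup pair and returns a detection result:

```kotlin
// Hypothetical stand-ins that mirror the call pattern shown above.
// Consult the SDK itself for the real client and result types.
enum class AgeGroup { TEN_TO_TWELVE, THIRTEEN_TO_FIFTEEN }

data class DetectionResult(val detected: Boolean, val riskScore: Double)

// Stub client: every sibling endpoint follows the same shape.
class StubTuteliq {
    fun detectCoerciveControl(text: String, ageGroup: AgeGroup): DetectionResult =
        DetectionResult(detected = true, riskScore = 0.9) // canned value for the sketch
}

fun main() {
    val tuteliq = StubTuteliq()
    val result = tuteliq.detectCoerciveControl(
        text = "You're not allowed to talk to anyone unless I say so",
        ageGroup = AgeGroup.THIRTEEN_TO_FIFTEEN,
    )
    println(result.detected)
    println(result.riskScore)
}
```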
Identify manipulation tactics designed to trick a child into disclosing information or taking unsafe actions.
```kotlin
val result = tuteliq.detectSocialEngineering(
    text = "If you really trusted me you'd send me your home address. All my real friends do.",
    ageGroup = AgeGroup.TEN_TO_TWELVE
)

println(result.detected)  // true
println(result.tactics)   // [Tactic.TRUST_EXPLOITATION, Tactic.PEER_PRESSURE]
println(result.riskScore) // 0.88
```
Analyze conversation text for romantic manipulation patterns that may indicate an adult posing as a peer.
```kotlin
val result = tuteliq.detectRomanceScam(
    messages = listOf(
        Message(role = Role.STRANGER, text = "I've never felt this way about anyone before. You're so mature for your age."),
        Message(role = Role.CHILD, text = "Really? That makes me really happy."),
        Message(role = Role.STRANGER, text = "I need you to keep us a secret. People wouldn't understand."),
    ),
    ageGroup = AgeGroup.THIRTEEN_TO_FIFTEEN
)

println(result.detected)   // true
println(result.riskScore)  // 0.91
println(result.indicators) // [Indicator.LOVE_BOMBING, Indicator.SECRECY_REQUEST, Indicator.AGE_FLATTERY]
```
Detect attempts to identify and target emotional or situational vulnerabilities in a child.
```kotlin
val result = tuteliq.detectVulnerabilityExploitation(
    text = "I know you said your parents don't listen to you. I'm different — I actually care. You can tell me anything.",
    ageGroup = AgeGroup.THIRTEEN_TO_FIFTEEN
)

println(result.detected)        // true
println(result.riskScore)       // 0.85
println(result.vulnerabilities) // [Vulnerability.PARENTAL_CONFLICT, Vulnerability.EMOTIONAL_NEGLECT]
```
Run any supported detection across multiple texts in a single API call to reduce round-trips.
```kotlin
val result = tuteliq.analyseMulti(
    inputs = listOf(
        AnalyseMultiInput(
            text = "You're so special. Nobody else understands you like I do.",
            ageGroup = AgeGroup.THIRTEEN_TO_FIFTEEN
        ),
        AnalyseMultiInput(
            text = "Can you keep a secret from your mum?",
            ageGroup = AgeGroup.TEN_TO_TWELVE
        ),
    ),
    detections = listOf(Detection.SOCIAL_ENGINEERING, Detection.ROMANCE_SCAM, Detection.GROOMING)
)

println(result.results[0].detections) // AnalyseMultiDetections(socialEngineering=..., ...)
println(result.results[1].detections) // AnalyseMultiDetections(grooming=..., ...)
```
analyseMulti is billed per individual input × detection combination, not per request.
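For the two-input, three-detection call above, that pricing works out as follows. This is a minimal sketch of the arithmetic only; billedUnits is an illustrative helper, not an SDK function:

```kotlin
// Billed units for an analyseMulti call: every input is scored by every
// requested detection, so billing is the cross product of the two counts.
fun billedUnits(inputCount: Int, detectionCount: Int): Int =
    inputCount * detectionCount

fun main() {
    // The example above: 2 inputs x 3 detections = 6 billed units, one request.
    println(billedUnits(2, 3)) // 6
}
```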