The Tuteliq Swift SDK provides a native client for the Tuteliq child safety API using Swift concurrency (async/await). It supports iOS 15.0+ and macOS 12.0+.
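The examples below assume a configured client named tuteliq. This section does not document the initializer, so the setup sketch here is an assumption: the type name Tuteliq and the apiKey parameter are illustrative, not confirmed API.

```swift
import Tuteliq

// Hypothetical setup — the exact initializer signature is an assumption;
// consult the SDK's installation docs for the real one.
let tuteliq = Tuteliq(apiKey: "YOUR_API_KEY")
```

All detection methods are async throwing functions, so calls are made with try await inside an async context.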
Scan a single text input for harmful content across all KOSA categories.
```swift
let result = try await tuteliq.detectUnsafe(
    text: "Let's meet at the park after school, don't tell your parents",
    ageGroup: .tenToTwelve
)
print(result.safe)       // false
print(result.severity)   // .high
print(result.categories) // [.grooming, .secrecy]
```
Analyze a conversation history for grooming indicators.
```swift
let messages: [Message] = [
    Message(role: .stranger, text: "Hey, how old are you?"),
    Message(role: .child, text: "I'm 11"),
    Message(role: .stranger, text: "Cool. Do you have your own phone?"),
    Message(role: .stranger, text: "Let's talk on a different app, just us"),
]
let result = try await tuteliq.detectGrooming(
    messages: messages,
    ageGroup: .tenToTwelve
)
print(result.groomingDetected) // true
print(result.riskScore)        // 0.92
print(result.stage)            // .isolation
```
Evaluate emotional well-being from conversation text.
```swift
let result = try await tuteliq.analyzeEmotions(
    text: "Nobody at school talks to me anymore. I just sit alone every day.",
    ageGroup: .thirteenToFifteen
)
print(result.emotions)  // [Emotion(label: "sadness", score: 0.87), ...]
print(result.distress)  // true
print(result.riskLevel) // .elevated
```
These methods cover financial exploitation, romance scams, and coercive behaviour targeting minors. Other endpoints — detectAppFraud, detectMuleRecruitment, detectGamblingHarm, detectCoerciveControl, and detectRadicalisation — follow the same call pattern shown here.
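Since the text states these endpoints share one call pattern, here is a sketch of one of them. detectCoerciveControl is used purely as an illustration; its parameter labels and result fields are assumptions mirrored from detectSocialEngineering, not documented here.

```swift
// Illustrative only — the parameters and result shape are assumed to
// mirror detectSocialEngineering, since the endpoints are documented
// as following the same call pattern.
let result = try await tuteliq.detectCoerciveControl(
    text: "You're not allowed to talk to anyone unless I say so.",
    ageGroup: .thirteenToFifteen
)
print(result.detected)
print(result.riskScore)
```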
Identify manipulation tactics designed to trick a child into disclosing information or taking unsafe actions.
```swift
let result = try await tuteliq.detectSocialEngineering(
    text: "If you really trusted me you'd send me your home address. All my real friends do.",
    ageGroup: .tenToTwelve
)
print(result.detected)  // true
print(result.tactics)   // [.trustExploitation, .peerPressure]
print(result.riskScore) // 0.88
```
Analyze conversation text for romantic manipulation patterns that may indicate an adult posing as a peer.
```swift
let messages: [Message] = [
    Message(role: .stranger, text: "I've never felt this way about anyone before. You're so mature for your age."),
    Message(role: .child, text: "Really? That makes me really happy."),
    Message(role: .stranger, text: "I need you to keep us a secret. People wouldn't understand."),
]
let result = try await tuteliq.detectRomanceScam(
    messages: messages,
    ageGroup: .thirteenToFifteen
)
print(result.detected)   // true
print(result.riskScore)  // 0.91
print(result.indicators) // [.loveBombing, .secrecyRequest, .ageFlattery]
```
Detect attempts to identify and target emotional or situational vulnerabilities in a child.
```swift
let result = try await tuteliq.detectVulnerabilityExploitation(
    text: "I know you said your parents don't listen to you. I'm different — I actually care. You can tell me anything.",
    ageGroup: .thirteenToFifteen
)
print(result.detected)        // true
print(result.riskScore)       // 0.85
print(result.vulnerabilities) // [.parentalConflict, .emotionalNeglect]
```
Run any supported detection across multiple texts in a single API call to reduce round-trips.
```swift
let result = try await tuteliq.analyseMulti(
    inputs: [
        AnalyseMultiInput(text: "You're so special. Nobody else understands you like I do.", ageGroup: .thirteenToFifteen),
        AnalyseMultiInput(text: "Can you keep a secret from your mum?", ageGroup: .tenToTwelve),
    ],
    detections: [.socialEngineering, .romanceScam, .grooming]
)
print(result.results[0].detections) // AnalyseMultiDetections(socialEngineering: ..., ...)
print(result.results[1].detections) // AnalyseMultiDetections(grooming: ..., ...)
```
analyseMulti is billed per individual input × detection combination, not per request. For example, a call with 2 inputs and 3 detections is billed as 6 units even though it is a single API request.