The Tuteliq Swift SDK provides a native client for the Tuteliq child safety API using Swift concurrency (async/await). It supports iOS 15.0+ and macOS 12.0+.

Installation

Add the Tuteliq package via Swift Package Manager in Xcode:
  1. Go to File > Add Package Dependencies.
  2. Enter the repository URL:
https://github.com/Tuteliq/swift
  3. Select your desired version rule and add the package to your target.
Alternatively, add it to your Package.swift:
dependencies: [
    .package(url: "https://github.com/Tuteliq/swift", from: "1.0.0"),
],
targets: [
    .target(
        name: "YourApp",
        dependencies: [
            .product(name: "Tuteliq", package: "swift"),
        ]
    ),
]

Deployment targets

Platform  Minimum Version
iOS       15.0
macOS     12.0

Initialize the client

import Tuteliq

let tuteliq = Tuteliq(apiKey: "YOUR_API_KEY")
Never hardcode API keys in source code. Store them in the Keychain, Xcode build configuration, or a secrets manager.
let tuteliq = Tuteliq(apiKey: Configuration.tuteliqApiKey)
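One common approach is to read the key from Info.plist at runtime, injecting it at build time from an .xcconfig file kept out of source control. The Configuration helper and the TUTELIQ_API_KEY key below are illustrative, not part of the SDK:
import Foundation

enum Configuration {
    /// Reads the API key from Info.plist, where it is injected at build
    /// time from an .xcconfig file that is excluded from source control.
    static var tuteliqApiKey: String {
        guard let key = Bundle.main.object(forInfoDictionaryKey: "TUTELIQ_API_KEY") as? String,
              !key.isEmpty else {
            fatalError("TUTELIQ_API_KEY is missing from Info.plist")
        }
        return key
    }
}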

Detect unsafe content

Scan a single text input for harmful content across all KOSA categories.
let result = try await tuteliq.detectUnsafe(
    text: "Let's meet at the park after school, don't tell your parents",
    ageGroup: .tenToTwelve
)

print(result.safe)        // false
print(result.severity)    // .high
print(result.categories)  // [.grooming, .secrecy]
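How your app responds to a flagged result is up to you. A minimal sketch might branch on severity; only .high appears in the example above, so other cases fall through to default here:
if !result.safe {
    switch result.severity {
    case .high:
        // Illustrative handling: block delivery and escalate.
        print("Blocking message and alerting a guardian")
    default:
        print("Queueing message for review")
    }
}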

Detect grooming patterns

Analyze a conversation history for grooming indicators.
let messages: [Message] = [
    Message(role: .stranger, text: "Hey, how old are you?"),
    Message(role: .child, text: "I'm 11"),
    Message(role: .stranger, text: "Cool. Do you have your own phone?"),
    Message(role: .stranger, text: "Let's talk on a different app, just us"),
]

let result = try await tuteliq.detectGrooming(
    messages: messages,
    ageGroup: .tenToTwelve
)

print(result.groomingDetected) // true
print(result.riskScore)        // 0.92
print(result.stage)            // .isolation

Analyze emotions

Evaluate emotional well-being from conversation text.
let result = try await tuteliq.analyzeEmotions(
    text: "Nobody at school talks to me anymore. I just sit alone every day.",
    ageGroup: .thirteenToFifteen
)

print(result.emotions)   // [Emotion(label: "sadness", score: 0.87), ...]
print(result.distress)   // true
print(result.riskLevel)  // .elevated

Analyze voice

Upload an audio file for transcription and safety analysis.
let audioURL = Bundle.main.url(forResource: "recording", withExtension: "wav")!
let audioData = try Data(contentsOf: audioURL)

let result = try await tuteliq.analyzeVoice(
    file: audioData,
    ageGroup: .thirteenToFifteen
)

print(result.transcript)
print(result.safe)
print(result.emotions)

Extended fraud and safety detection

These methods cover financial exploitation, romance scams, and coercive behaviour targeting minors. Other endpoints — detectAppFraud, detectMuleRecruitment, detectGamblingHarm, detectCoerciveControl, and detectRadicalisation — follow the same call pattern shown here.
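As a sketch, assuming the same text-plus-ageGroup signature as detectSocialEngineering below, a coercive-control check would look like this (the exact response fields may differ per endpoint):
let result = try await tuteliq.detectCoerciveControl(
    text: "You're not allowed to talk to anyone else online without asking me first.",
    ageGroup: .thirteenToFifteen
)

print(result.detected)
print(result.riskScore)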

Detect social engineering

Identify manipulation tactics designed to trick a child into disclosing information or taking unsafe actions.
let result = try await tuteliq.detectSocialEngineering(
    text: "If you really trusted me you'd send me your home address. All my real friends do.",
    ageGroup: .tenToTwelve
)

print(result.detected)    // true
print(result.tactics)     // [.trustExploitation, .peerPressure]
print(result.riskScore)   // 0.88

Detect romance scam

Analyze conversation text for romantic manipulation patterns that may indicate an adult posing as a peer.
let messages: [Message] = [
    Message(role: .stranger, text: "I've never felt this way about anyone before. You're so mature for your age."),
    Message(role: .child, text: "Really? That makes me really happy."),
    Message(role: .stranger, text: "I need you to keep us a secret. People wouldn't understand."),
]

let result = try await tuteliq.detectRomanceScam(
    messages: messages,
    ageGroup: .thirteenToFifteen
)

print(result.detected)    // true
print(result.riskScore)   // 0.91
print(result.indicators)  // [.loveBombing, .secrecyRequest, .ageFlattery]

Detect vulnerability exploitation

Detect attempts to identify and target emotional or situational vulnerabilities in a child.
let result = try await tuteliq.detectVulnerabilityExploitation(
    text: "I know you said your parents don't listen to you. I'm different — I actually care. You can tell me anything.",
    ageGroup: .thirteenToFifteen
)

print(result.detected)         // true
print(result.riskScore)        // 0.85
print(result.vulnerabilities)  // [.parentalConflict, .emotionalNeglect]

Analyse multiple texts in one request

Run any supported detection across multiple texts in a single API call to reduce round-trips.
let result = try await tuteliq.analyseMulti(
    inputs: [
        AnalyseMultiInput(text: "You're so special. Nobody else understands you like I do.", ageGroup: .thirteenToFifteen),
        AnalyseMultiInput(text: "Can you keep a secret from your mum?", ageGroup: .tenToTwelve),
    ],
    detections: [.socialEngineering, .romanceScam, .grooming]
)

print(result.results[0].detections)  // AnalyseMultiDetections(socialEngineering: ..., ...)
print(result.results[1].detections)  // AnalyseMultiDetections(grooming: ..., ...)
analyseMulti is billed per input × detection combination, not per request; the call above (2 inputs × 3 detections) counts as 6 billable analyses.

Error handling

The SDK throws typed TuteliqError values that you can pattern-match on.
do {
    let result = try await tuteliq.detectUnsafe(
        text: "some content",
        ageGroup: .tenToTwelve
    )
} catch let error as TuteliqError {
    print(error.code)    // e.g. .authInvalidKey
    print(error.message) // human-readable description
    print(error.status)  // HTTP status code
} catch {
    print("Unexpected error: \(error)")
}
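You can switch on the code to react differently per failure mode. Only .authInvalidKey is shown above; the default branch covers everything else, and which codes are worth retrying is an app-level decision:
do {
    _ = try await tuteliq.detectUnsafe(
        text: "some content",
        ageGroup: .tenToTwelve
    )
} catch let error as TuteliqError {
    switch error.code {
    case .authInvalidKey:
        // The key is wrong; retrying will not help.
        print("Check your API key configuration")
    default:
        print("Request failed (\(error.status)): \(error.message)")
    }
}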

Configuration options

let tuteliq = Tuteliq(
    apiKey: Configuration.tuteliqApiKey,
    baseURL: URL(string: "https://api.tuteliq.ai")!, // default
    timeout: 30,                                       // request timeout in seconds
    retries: 2                                         // automatic retries on failure
)

SwiftUI integration

The SDK works seamlessly with SwiftUI. All methods are async and can be called directly from .task modifiers or @MainActor contexts.
struct ContentModerationView: View {
    let messageText: String
    @State private var isSafe: Bool?
    let tuteliq = Tuteliq(apiKey: Configuration.tuteliqApiKey)

    var body: some View {
        Text(label)
            .task {
                let result = try? await tuteliq.detectUnsafe(
                    text: messageText,
                    ageGroup: .thirteenToFifteen
                )
                isSafe = result?.safe
            }
    }

    private var label: String {
        switch isSafe {
        case .some(true): return "Content is safe"
        case .some(false): return "Content flagged"
        case .none: return "Checking..."
        }
    }
}
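For live checking as the user types, key the task to the input so SwiftUI cancels and restarts it on every change; the short sleep acts as a simple debounce. This sketch uses only standard SwiftUI and Swift concurrency plus the detectUnsafe call shown earlier:
import SwiftUI
import Tuteliq

struct LiveModerationView: View {
    @State private var draft = ""
    @State private var isSafe: Bool?
    let tuteliq = Tuteliq(apiKey: Configuration.tuteliqApiKey)

    var body: some View {
        TextField("Message", text: $draft)
            .task(id: draft) {
                // Wait briefly; if the user keeps typing, SwiftUI cancels
                // this task and starts a new one for the latest value.
                try? await Task.sleep(nanoseconds: 400_000_000)
                guard !Task.isCancelled, !draft.isEmpty else { return }
                let result = try? await tuteliq.detectUnsafe(
                    text: draft,
                    ageGroup: .thirteenToFifteen
                )
                isSafe = result?.safe
            }
    }
}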

Next steps

API Reference

Explore the full API specification.

Node.js SDK

See the Node.js SDK guide.