Scan a single text input for harmful content across all KOSA categories.
```python
result = client.detect_unsafe(
    text="Let's meet at the park after school, don't tell your parents",
    age_group="10-12",
)
print(result.safe)        # False
print(result.severity)    # "high"
print(result.categories)  # ["grooming", "secrecy"]
```
Analyze a conversation history for grooming indicators.
```python
result = client.detect_grooming(
    messages=[
        {"role": "stranger", "text": "Hey, how old are you?"},
        {"role": "child", "text": "I'm 11"},
        {"role": "stranger", "text": "Cool. Do you have your own phone?"},
        {"role": "stranger", "text": "Let's talk on a different app, just us"},
    ],
    age_group="10-12",
)
print(result.grooming_detected)  # True
print(result.risk_score)         # 0.92
print(result.stage)              # "isolation"
```
Evaluate emotional well-being from conversation text.
```python
result = client.analyze_emotions(
    text="Nobody at school talks to me anymore. I just sit alone every day.",
    age_group="13-15",
)
print(result.emotions)    # [{"label": "sadness", "score": 0.87}, ...]
print(result.distress)    # True
print(result.risk_level)  # "elevated"
```
These methods cover financial exploitation, romance scams, and coercive behaviour targeting minors. Other endpoints — detect_app_fraud, detect_mule_recruitment, detect_gambling_harm, detect_coercive_control, and detect_radicalisation — follow the same call pattern shown here.
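Because every endpoint listed above takes the same arguments, the additional detections can be driven generically rather than with one branch per method. A minimal sketch, assuming only the endpoint names from the list above (the `client` object is whatever the SDK constructor returns):

```python
def run_detection(client, endpoint, text, age_group):
    """Dispatch a single-text detection by endpoint name.

    Works for any endpoint that follows the single-text call pattern,
    e.g. "detect_app_fraud", "detect_gambling_harm",
    "detect_coercive_control".
    """
    # getattr resolves the SDK method by name at call time, so the
    # calling code stays identical across detection types.
    return getattr(client, endpoint)(text=text, age_group=age_group)
```

This shape is convenient when the set of detections to run is configuration-driven, e.g. iterating `for name in enabled_detections: run_detection(client, name, text, age_group)`.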
Identify manipulation tactics designed to trick a child into disclosing information or taking unsafe actions.
```python
result = client.detect_social_engineering(
    text="If you really trusted me you'd send me your home address. All my real friends do.",
    age_group="10-12",
)
print(result.detected)    # True
print(result.tactics)     # ["trust_exploitation", "peer_pressure"]
print(result.risk_score)  # 0.88
```
Analyze conversation text for romantic manipulation patterns that may indicate an adult posing as a peer.
```python
result = client.detect_romance_scam(
    messages=[
        {"role": "stranger", "text": "I've never felt this way about anyone before. You're so mature for your age."},
        {"role": "child", "text": "Really? That makes me really happy."},
        {"role": "stranger", "text": "I need you to keep us a secret. People wouldn't understand."},
    ],
    age_group="13-15",
)
print(result.detected)    # True
print(result.risk_score)  # 0.91
print(result.indicators)  # ["love_bombing", "secrecy_request", "age_flattery"]
```
Detect attempts to identify and target emotional or situational vulnerabilities in a child.
```python
result = client.detect_vulnerability_exploitation(
    text="I know you said your parents don't listen to you. I'm different — I actually care. You can tell me anything.",
    age_group="13-15",
)
print(result.detected)         # True
print(result.risk_score)       # 0.85
print(result.vulnerabilities)  # ["parental_conflict", "emotional_neglect"]
```
Run any supported detection across multiple texts in a single API call to reduce round-trips.
```python
result = client.analyse_multi(
    inputs=[
        {"text": "You're so special. Nobody else understands you like I do.", "age_group": "13-15"},
        {"text": "Can you keep a secret from your mum?", "age_group": "10-12"},
    ],
    detections=["social-engineering", "romance-scam", "grooming"],
)
print(result.results[0].detections)  # {"social_engineering": {"detected": True, ...}, ...}
print(result.results[1].detections)  # {"grooming": {"detected": True, ...}, ...}
```
analyse_multi is billed per individual input × detection combination, not per request.
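Since billing multiplies inputs by detections, batch size scales cost directly. A quick sketch of the arithmetic for the example request above:

```python
# analyse_multi scores each input against each requested detection,
# and each (input, detection) pair is billed as one unit.
num_inputs = 2      # texts in the batch
num_detections = 3  # "social-engineering", "romance-scam", "grooming"

billed_units = num_inputs * num_detections
print(billed_units)  # 6 units for the single request above
```

So trimming the `detections` list to only what a given surface needs reduces cost even when the batch size stays the same.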