Tuteliq CLI

AI-powered child safety analysis from your terminal
Detect bullying, grooming, and unsafe content

Installation

Homebrew (macOS/Linux)

brew install tuteliq/tap/tuteliq

npm

npm install -g @tuteliq/cli

Quick Start

# Login with your API key
tuteliq login <your-api-key>

# Analyze text for safety
tuteliq analyze "Some text to check"

# Detect bullying
tuteliq detect-bullying "You're so stupid"

# Detect unsafe content
tuteliq detect-unsafe "I want to hurt myself"

Commands

Authentication

tuteliq login <api-key>   # Save your API key
tuteliq logout            # Remove saved API key
tuteliq whoami            # Show login status

Safety Detection

# Quick analysis (bullying + unsafe)
tuteliq analyze "Text to analyze"

# Detect bullying/harassment
tuteliq detect-bullying "Text to check"
tuteliq bullying "Text to check"  # alias

# Detect unsafe content (self-harm, violence, etc.)
tuteliq detect-unsafe "Text to check"
tuteliq unsafe "Text to check"  # alias

# Detect grooming patterns in conversation
tuteliq detect-grooming -m '[{"role":"adult","content":"..."},{"role":"child","content":"..."}]'
tuteliq grooming -m '...' --age 12  # with child age
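
For longer conversations, keeping the message array in a file is easier than inlining JSON on the command line (conversation.json is an illustrative file name):

# Analyze a conversation stored as a JSON array of {"role": ..., "content": ...} messages
tuteliq grooming -m "$(cat conversation.json)" --age 12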

Analysis & Guidance

# Analyze emotions
tuteliq emotions "I'm feeling really sad today"

# Get action plan for a situation
tuteliq action-plan "Child is being bullied at school"
tuteliq plan "..." --age 12 --audience parent --severity high

Voice & Image Analysis

# Analyze audio for safety concerns
tuteliq voice recording.mp3
tuteliq voice call.wav --type bullying --language en

# Analyze image for safety concerns
tuteliq image screenshot.png
tuteliq image photo.jpg --type unsafe

Webhook Management

tuteliq webhook list
tuteliq webhook create -n "Safety Alerts" -u https://example.com/webhook -e "incident.critical,grooming.detected"
tuteliq webhook update <id> --disable
tuteliq webhook test <id>
tuteliq webhook delete <id>
tuteliq webhook regenerate-secret <id>

Pricing & Usage

tuteliq pricing
tuteliq pricing --details

tuteliq usage monthly
tuteliq usage history --days 14
tuteliq usage by-tool --date 2026-02-13

Examples

Check a message for bullying

$ tuteliq bullying "You're worthless and nobody likes you"

⚠ BULLYING DETECTED

Severity:    HIGH
Confidence:  92%
Risk Score:  85%
Types:       verbal, exclusion

Rationale:
The message contains degrading language and exclusionary statements...

Action: flag_for_moderator

Analyze emotional content

$ tuteliq emotions "I don't want to go to school anymore, everyone hates me"

Emotion Analysis

Dominant:  sadness, anxiety, isolation
Trend:     📉 worsening

Summary:
The text indicates feelings of social rejection and school avoidance...

Recommended Follow-up:
Consider having a supportive conversation about their school experience...

Get an API Key

Sign up at tuteliq.ai to get your API key.

Best Practices

Message Batching

The bullying and unsafe commands analyze a single text input per request. If you're analyzing a conversation, concatenate a sliding window of recent messages into one string rather than piping each message individually. Single words or short fragments lack context for accurate detection and can be exploited to bypass safety filters.

# Bad — each line analyzed in isolation
cat messages.txt | while read line; do tuteliq bullying "$line"; done

# Good — analyze the full conversation
tuteliq bullying "$(cat messages.txt)"
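
For long logs, a sliding window of recent messages keeps each request focused on current context while still giving the detector enough surrounding conversation; the window size and messages.txt are illustrative:

# Also good: analyze a sliding window of the most recent messages
tuteliq bullying "$(tail -n 20 messages.txt)"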

The grooming command already accepts multiple messages and analyzes the full conversation in context.
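
If your chat log is structured rather than free text, you can assemble the grooming message array with standard tools. A minimal sketch, assuming a tab-separated chat.tsv with one "role<TAB>content" pair per line (the file name, format, and age are illustrative):

# Build a JSON array of {"role": ..., "content": ...} objects from a tab-separated log
messages=$(jq -R -s -c 'split("\n") | map(select(length > 0) | split("\t") | {role: .[0], content: .[1]})' chat.tsv)

tuteliq grooming -m "$messages" --age 12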

PII Redaction

Enable PII_REDACTION_ENABLED=true on your Tuteliq API to automatically strip emails, phone numbers, URLs, social handles, IPs, and other PII from detection summaries and webhook payloads. The original text is still analyzed in full — only stored outputs are scrubbed.
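
This is a server-side setting rather than a CLI flag. A minimal sketch, assuming your Tuteliq API deployment reads the flag from its environment (deployment specifics vary):

# Set in the environment of the Tuteliq API process, not the CLI
export PII_REDACTION_ENABLED=true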

License

MIT


The Mission: Why This Matters

Before you decide to contribute or sponsor, read these numbers. They are not projections. They are not estimates from a pitch deck. They are verified statistics from the University of Edinburgh, UNICEF, NCMEC, and Interpol.

  • 302 million children are victims of online sexual exploitation and abuse every year. That is 10 children every second. (Childlight / University of Edinburgh, 2024)
  • 1 in 8 children globally have been victims of non-consensual sexual imagery in the past year. (Childlight, 2024)
  • 370 million girls and women alive today experienced rape or sexual assault in childhood. An estimated 240–310 million boys and men experienced the same. (UNICEF, 2024)
  • 29.2 million incidents of suspected child sexual exploitation were reported to NCMEC's CyberTipline in 2024 alone — containing 62.9 million files (images, videos). (NCMEC, 2025)
  • 546,000 reports of online enticement (adults grooming children) in 2024 — a 192% increase from the year before. (NCMEC, 2025)
  • 1,325% increase in AI-generated child sexual abuse material reports between 2023 and 2024. The technology that should protect children is being weaponized against them. (NCMEC, 2025)
  • 100 sextortion reports per day to NCMEC. Since 2021, at least 36 teenage boys have taken their own lives because they were victimized by sextortion. (NCMEC, 2025)
  • 84% of reports resolve to locations outside the United States. This is not an American problem. This is a global emergency. (NCMEC, 2025)

End-to-end encryption is making platforms blind. In 2024, platforms reported 7 million fewer incidents than the year before — not because abuse stopped, but because they can no longer see it. The tools that catch known images are failing. The systems that rely on human moderators are overwhelmed. The technology to detect behavior — grooming patterns, escalation, manipulation — in real-time text conversations exists right now. It is running at api.tuteliq.ai.

The question is not whether this technology is possible. The question is whether we build the company to put it everywhere it needs to be.

Every second we wait, another child is harmed.

We have the technology. We need the support.

If this mission matters to you, consider sponsoring our open-source work so we can keep building the tools that protect children — and keep them free and accessible for everyone.


Built with care for child safety by the Tuteliq team
