Detect AI-generated content with explainable evidence across text, image, audio, and video. Built for fact-checking, newsroom workflows, trust-and-safety teams, and investigation support.
Live Demo | API Docs | Benchmark & Status | Open-Core Model
- Upload a file or paste a public URL.
- Run detection with modality-aware analysis.
- Review an explainable evidence card (verdict, confidence, timestamp, analysis ID).
| Modality | Status | Primary Entry |
|---|---|---|
| Text | Stable | /api/v1/detect/text, /api/v1/detect/url |
| Image | Stable | /api/v1/detect/image, /api/v1/detect/url |
| Audio | Experimental | /api/v1/detect/audio |
| Video | Experimental | /api/v1/detect/video, /api/v1/detect/url |
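A client can map a modality to its primary endpoint from the table above. This is an illustrative sketch only; the helper name and error handling are assumptions, while the paths and base URL come from this README.

```python
# Map each modality to its primary detection endpoint (paths per the table above).
ENDPOINTS = {
    "text": "/api/v1/detect/text",
    "image": "/api/v1/detect/image",
    "audio": "/api/v1/detect/audio",
    "video": "/api/v1/detect/video",
    "url": "/api/v1/detect/url",
}

def detect_endpoint(modality: str, base: str = "https://api.whoisfake.com") -> str:
    """Return the full detection URL for a modality; reject unknown modalities."""
    try:
        return base + ENDPOINTS[modality]
    except KeyError:
        raise ValueError(f"unsupported modality: {modality}") from None
```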
This is: an evidence-first triage system for authenticity analysis.
This is not: legal proof or a sole-decision engine for high-stakes outcomes.
- Methodology limits: docs/METHODOLOGY_LIMITATIONS.md
- Known operational status: docs/ROADMAP_STATUS.md
- Public/private repo split: docs/OPEN_CORE.md
```mermaid
flowchart LR
  A[Input: text, image, audio, video, URL] --> B[Router]
  B --> C[Internal detector by modality]
  C --> D[Calibration and decision band]
  D --> E[Provider consensus]
  E --> F[Evidence card + audit event]
  F --> G[API response + history]
```
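The calibration and decision-band stages of the flow above can be sketched in a few lines. This is a simplified illustration, not the real engine: the calibration function, band thresholds, and field names are assumptions, and the provider-consensus stage is omitted.

```python
from dataclasses import dataclass

@dataclass
class EvidenceCard:
    verdict: str       # e.g. "likely_ai", "likely_human", "uncertain"
    confidence: float  # calibrated score in [0, 1]
    signals: list      # human-readable evidence signals

def calibrate(raw: float) -> float:
    # Placeholder calibration: identity map; a real system applies a fitted curve.
    return raw

def apply_decision_band(p: float, low: float = 0.35, high: float = 0.65) -> str:
    # Map a calibrated score into a decision band; thresholds are illustrative.
    if p < low:
        return "likely_human"
    if p > high:
        return "likely_ai"
    return "uncertain"

def build_card(raw_score: float, signals: list) -> EvidenceCard:
    confidence = calibrate(raw_score)
    return EvidenceCard(apply_decision_band(confidence), confidence, signals)
```

A mid-range score lands in the "uncertain" band rather than forcing a binary verdict, which is what makes the card useful for triage.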
```mermaid
sequenceDiagram
  participant U as Analyst
  participant W as whoisfake.com
  participant API as API
  participant ENG as Detection Engine
  participant AUD as Audit Store
  U->>W: Upload or paste URL
  W->>API: Detection request
  API->>ENG: Run modality analysis
  ENG-->>API: Verdict + confidence + signals
  API->>AUD: Save analysis_id, timestamp, evidence
  API-->>W: Explainable result card
  W-->>U: Shareable evidence summary
```
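The audit step in the sequence above persists enough to trace a result later (analysis ID, timestamp, evidence). The shape below is an illustrative assumption, not the actual store schema.

```python
from dataclasses import dataclass, field
import uuid
import datetime

@dataclass
class AuditEvent:
    """Minimal sketch of a persisted audit record (hypothetical field set)."""
    verdict: str
    confidence: float
    # A unique ID and UTC timestamp are generated at save time.
    analysis_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc).isoformat()
    )
```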
```json
{
  "analysis_id": "a4f2d7e3-...",
  "content_type": "text",
  "result": {
    "decision_band": "uncertain",
    "is_ai_generated": false,
    "confidence": 0.57,
    "model_version": "text-detector:distilroberta-v1",
    "calibration_version": "calibrated-20260312:general"
  },
  "timestamp": "2026-03-19T12:31:00Z"
}
```

```bash
git clone https://github.com/ogulcanaydogan/AI-Provenance-Tracker.git
cd AI-Provenance-Tracker
cp backend/.env.example backend/.env
make up
```

- Frontend: http://localhost:3000
- API: http://localhost:8000
- Swagger: http://localhost:8000/docs
- URL analysis UI: http://localhost:3000/detect/url
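Downstream tooling should generally branch on the `decision_band` of the sample response rather than the boolean `is_ai_generated` flag alone, so uncertain results reach a human. A minimal sketch, assuming the field names from the sample payload; the band-to-action mapping and band names other than "uncertain" are illustrative.

```python
import json

def triage(payload: str) -> str:
    """Map a detection response to a triage action (hypothetical mapping)."""
    card = json.loads(payload)
    band = card["result"]["decision_band"]
    # Unknown or uncertain bands default to human review instead of auto-labeling.
    return {"likely_ai": "flag", "likely_human": "pass"}.get(band, "human_review")
```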
```bash
curl -X POST "https://api.whoisfake.com/api/v1/detect/url" \
  -H "Content-Type: application/json" \
  -d '{"url":"https://example.com"}'
```

```bash
curl -X POST "https://api.whoisfake.com/api/v1/detect/text" \
  -H "Content-Type: application/json" \
  -d '{"text":"Your content to analyze"}'
```

- API reference: docs/API.md
- Deployment: docs/DEPLOYMENT.md
- Roadmap and run evidence: docs/ROADMAP_STATUS.md
- Open-core boundary: docs/OPEN_CORE.md
- Architecture details: docs/ARCHITECTURE.md
About text (recommended): Open-source multimodal AI provenance detection platform with explainable evidence cards, provider consensus, and benchmark-driven quality gates.
Topics (recommended):
ai-detection, fact-checking, fastapi, nextjs, provenance, multimodal
WhoisFake / AI Provenance Tracker reports AI-generation signals in text, image, audio, and video content in an explainable way.
- What it does: analysis + evidence card + traceability
- How to start: paste a URL or upload a file, then share the result
- Limits: produces probabilistic results; not legal proof on its own
MIT - see LICENSE.