Unified real-time intelligence pipeline for The Regulated Friction Project.
Merges the node-level tracking pipeline (Live Trackers v1.x) with the signal-level intelligence pipeline (The Regulated Friction Project's daily_perplexity_update.py) into a single automated system. Includes entity disambiguation to prevent single-topic hyper-focus, signal-level tracking across 8 active signals, breaking news scanning across tracked entities, and prediction verification for pending forecasts.
Author: Austin Smith | 19D Cavalry Scout (OSINT Methodology)
```
┌─────────────────────┐     ┌──────────────────────┐     ┌────────────────────────┐
│   node_tracker.py   │────▶│ entity_extractor.py  │────▶│ convergence_detector.py│
│  (Perplexity Pro)   │     │  (Llama Scout 17B)   │     │    (Local Analysis)    │
└─────────────────────┘     └──────────────────────┘     └────────────────────────┘
          │                           │                              │
          ▼                           ▼                              ▼
  node_status.json         extracted_entities.json       convergence_report.json

┌─────────────────────────┐      ┌────────────────────────┐
│  daily_intelligence.py  │─────▶│    fact_checker.py     │
│    (Perplexity Pro)     │      │   (Anthropic Claude)   │
└─────────────────────────┘      └────────────────────────┘
            │                                │
            ▼                                ▼
  daily_intelligence.json             fact_check.json
  + live_verification.json                   │
                                             ▼
                            docs/data/ → regulatedfriction.me
```
Stage 1 — Node Tracker: Queries Perplexity sonar-pro for the current status of each leverage node. Classifies events as FRICTION, COMPLIANCE, ESCALATION, or DE_ESCALATION. Includes entity disambiguation to prevent model confusion between similarly named organisations and initiatives.
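The query-and-classify step can be sketched as two small helpers (names here are illustrative, not the actual `node_tracker.py` API): a prompt builder that appends the disambiguation notes, and a parser that pulls the first valid classification label out of the model's reply.

```python
# Hypothetical sketch of Stage 1's classification step; the real
# node_tracker.py may structure this differently.
VALID_CLASSES = {"FRICTION", "COMPLIANCE", "ESCALATION", "DE_ESCALATION"}

def build_node_query(node_name: str, topics: list[str], disambiguation: str = "") -> str:
    """Compose the Perplexity sonar-pro prompt for one leverage node."""
    prompt = (
        f"Report the current status of the '{node_name}' node. "
        f"Focus on: {', '.join(topics)}. "
        "Classify the dominant event as FRICTION, COMPLIANCE, "
        "ESCALATION, or DE_ESCALATION."
    )
    if disambiguation:
        # Appended so the model does not conflate similarly named
        # organisations and initiatives.
        prompt += f" Disambiguation notes: {disambiguation}"
    return prompt

def parse_classification(raw: str) -> str:
    """Extract the first valid classification label from a model reply."""
    for token in raw.upper().replace(",", " ").split():
        token = token.strip(".;:!")
        if token in VALID_CLASSES:
            return token
    return "UNCLASSIFIED"
```

Keeping the label set closed means a rambling reply degrades to `UNCLASSIFIED` rather than polluting downstream JSON.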
Stage 2 — Entity Extractor: Sends node status reports to Llama-4-Scout-17B for structured extraction — named entities, relationships, temporal markers, and cross-node connections. Enhanced with disambiguation and 16K token output.
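Before the extraction lands in `extracted_entities.json`, the model's JSON reply needs validating. A minimal sketch (the field names are assumptions, not the exact schema `entity_extractor.py` emits):

```python
import json

# Sections the extraction schema is assumed to carry, per the stage
# description: entities, relationships, temporal markers, connections.
REQUIRED_KEYS = {"entities", "relationships", "temporal_markers", "cross_node_connections"}

def validate_extraction(raw_json: str) -> dict:
    """Parse the model's JSON reply and backfill any missing sections."""
    data = json.loads(raw_json)
    for key in REQUIRED_KEYS:
        data.setdefault(key, [])  # empty list rather than a missing key
    return data
```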
Stage 3 — Convergence Detector: Reads historical tracker runs and detects convergence patterns — windows where 3+ nodes show simultaneous activity, consistent with the thermostat model's 7-day convergence window.
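The 7-day convergence check reduces to a plain sliding-window scan over dated node activity. A minimal sketch (function and field names are illustrative, not the detector's actual API):

```python
from datetime import date, timedelta

def find_convergence_windows(events: list[tuple[str, date]],
                             window_days: int = 7,
                             min_nodes: int = 3) -> list[tuple[date, set[str]]]:
    """Return (window_start, active_nodes) pairs where min_nodes or more
    distinct nodes show activity within window_days of the start date."""
    events = sorted(events, key=lambda e: e[1])
    windows = []
    for _, start in events:
        end = start + timedelta(days=window_days)
        active = {node for node, day in events if start <= day < end}
        if len(active) >= min_nodes:
            windows.append((start, active))
    return windows
```

No API calls are needed; this runs entirely over the local history files.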
Stage 4 — Daily Intelligence: (Merged from The Regulated Friction Project) Tracks 8 active signals individually, scans breaking news across all 59 tracked entities, generates a prioritized daily summary, and verifies pending predictions. Prompt engineering explicitly prevents single-signal hyper-focus.
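The anti-hyper-focus pattern amounts to enumerating every signal explicitly in the prompt and demanding an entry for each. A sketch of that idea (signal names and wording are placeholders, not the project's real prompt):

```python
def build_daily_prompt(signals: list[str]) -> str:
    """Enumerate every signal so no single one dominates the summary."""
    lines = [
        "Report today's developments for EACH signal below.",
        "Give every signal its own entry, even if the update is 'no change'.",
        "Do not devote a disproportionate share of the summary to any one signal.",
    ]
    for i, sig in enumerate(signals, 1):
        lines.append(f"{i}. {sig}")
    return "\n".join(lines)
```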
Stage 5 — Fact Checker: Sends verifiable claims from pipeline output to Anthropic Claude Sonnet for factual verification. Flags and corrects incorrect dates, misattributed actions, and fabricated events in-place, so the dashboard always shows verified content.
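The in-place correction step can be sketched as a merge of Claude's verdicts back into the report before the dashboard reads it (the record structure below is an assumption, not `fact_checker.py`'s actual format):

```python
def apply_corrections(report: dict, verdicts: list[dict]) -> dict:
    """Overwrite claims judged incorrect with their corrections, in place."""
    fixes = {v["claim_id"]: v for v in verdicts if v["verdict"] == "incorrect"}
    for item in report.get("claims", []):
        fix = fixes.get(item["id"])
        if fix:
            item["text"] = fix["correction"]   # replace the bad claim
            item["fact_checked"] = True        # flag for the dashboard
    return report
```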
| Node | Type | What It Tracks |
|---|---|---|
| `maxwell` | Information | Clemency negotiations, House Oversight, 5th Amendment invocation, habeas corpus petition |
| `iran` | Kinetic | US-Iran war status, Khamenei succession, Strait of Hormuz closure, Iran retaliation, energy crisis |
| `gulf_swf` | Capital | PIF, Mubadala, MGX positioning under wartime stress, Strait of Hormuz energy impact on $4.9T AUM |
| `israel` | Kinetic | Cyber/intelligence operations, Lebanon front, Abraham Accords capital bridge, Board of Peace |
| `oracle_ellison` | Capital | Oracle financial stress ($45-50B raise), Stargate contraction, Ellison WBD guarantee |
| `epstein_files` | Information | DOJ releases, FBI 302 interview summaries, Congressional subpoenas, Clinton depositions |
| `arkansas_datacenter` | Capital | State-level preemption (Good Day Farm), $17B+ datacenter deployment, utility rate capture |
```
pip install -r requirements.txt
```

Create a `.env` file (never commit this):

```
PERPLEXITY_API_KEY=your_key_here
LLAMA_SCOUT_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here
```
```
# Full pipeline (all five stages)
chmod +x run_pipeline.sh
./run_pipeline.sh

# Individual stages
python node_tracker.py            # Stage 1 only
python entity_extractor.py        # Stage 2 only (requires Stage 1 output)
python convergence_detector.py    # Stage 3 only (requires Stage 1 history)
python daily_intelligence.py      # Stage 4 only (signal tracking + predictions)
python fact_checker.py            # Stage 5 only (requires prior stage output)

# Single node
python node_tracker.py --node maxwell

# Dry run (preview queries)
python node_tracker.py --dry-run
python daily_intelligence.py --dry-run
```

| File | Content |
|---|---|
| `output/node_status.json` | Current status of all tracked nodes (overwritten each run) |
| `output/extracted_entities.json` | Structured entities, relationships, and cross-node connections |
| `output/convergence_report.json` | Multi-node convergence analysis with friction/compliance pairs |
| `output/daily_intelligence.json` | Signal status + breaking news + daily summary |
| `output/live_verification.json` | Prediction verification results |
| `output/fact_check.json` | Anthropic Claude fact-verification results |
| `output/history/tracker_*.json` | Timestamped copies of every tracker run |
| `output/history/extraction_*.json` | Timestamped copies of every extraction run |
| `output/history/intel_*.json` | Timestamped copies of every intelligence run |
The project includes a static dashboard deployed to regulatedfriction.me via GitHub Pages. Pipeline data is automatically published after each run.
The dashboard shows:
- Node Status — Live status cards with classification badges, confidence levels, and entity tags
- Intelligence — Daily intelligence summary with signal status, breaking news, top developments, and priority watchlist
- Convergence Analysis — Multi-node convergence windows, friction/compliance pairs, and node activity summary
- Predictions — Pending prediction verification results with status tracking
- Entity Extraction — Structured entities, relationships, and cross-node connections
- History — Timeline of pipeline runs and convergence events
- Fact-Check Banner — Anthropic Claude verification results when available
Everything runs through GitHub Actions — no external servers required:
- Push to `main` → validates code, checks for secrets, compiles pipeline scripts
- Scheduled pipeline → runs the tracker pipeline twice daily (08:00 / 20:00 UTC), publishes to dashboard
- Manual trigger → run either workflow on demand from the Actions tab
See SETUP.md for complete configuration instructions.
See SECRETS_REPORT.md for required repository secrets.
| Secret | Description |
|---|---|
| `PERPLEXITY_API_KEY` | Perplexity API key (for Stage 1 + Stage 4) |
| `LLAMA_SCOUT_KEY` | GitHub Models API key (for Stage 2 — Entity Extraction) |
| `ANTHROPIC_API_KEY` | Anthropic API key (for Stage 5 — Fact Verification via Claude Sonnet) |
```
Live_Trackers/
├── docs/                            # Static dashboard (GitHub Pages)
│   ├── index.html                   # Dashboard UI
│   ├── CNAME                        # Custom domain (regulatedfriction.me)
│   └── data/                        # Pipeline output (auto-published)
├── tracker_config.json              # Unified config: nodes + signals + disambiguation
├── node_tracker.py                  # Stage 1: Perplexity node status monitoring
├── entity_extractor.py              # Stage 2: Llama Scout entity extraction
├── convergence_detector.py          # Stage 3: Multi-node convergence detection
├── daily_intelligence.py            # Stage 4: Signal tracking + breaking news + predictions
├── fact_checker.py                  # Stage 5: Anthropic Claude fact verification
├── run_pipeline.sh                  # Full pipeline runner
├── requirements.txt                 # Python dependencies
├── requirements_live_trackers.txt   # Pipeline-only dependencies
├── SETUP.md                         # GitHub Actions setup guide
├── SECRETS_REPORT.md                # Required secrets documentation
├── .github/workflows/
│   ├── deploy.yml                   # CI/CD: validate on push to main
│   └── run_pipeline.yml             # Scheduled pipeline (5 stages + dashboard publish)
├── _AI_CONTEXT_INDEX/               # Synced from The Regulated Friction Project
├── output/                          # Pipeline outputs (gitignored, force-added)
│   ├── node_status.json
│   ├── extracted_entities.json
│   ├── convergence_report.json
│   ├── daily_intelligence.json
│   ├── live_verification.json
│   ├── fact_check.json
│   └── history/                     # Timestamped run history
├── .env                             # API keys (gitignored)
└── .gitignore
```
This repository is the unified operational monitoring layer for the analytical framework documented in The Regulated Friction Project. The _AI_CONTEXT_INDEX/ directory is synced from the parent project via GitHub Actions and provides the knowledge base context.
v2.0 Merge: The daily_intelligence.py pipeline was ported from daily_perplexity_update.py in The Regulated Friction Project. The tracker_config.json now includes signal definitions, tracked entities, and entity disambiguation from intelligence_config.json. This eliminates the need to maintain two separate pipelines.
Key framework reference:
- Core correlation: r = +0.6196 (p = 0.0004) between friction and compliance events
- Convergence window: 7-day median lag
- Active leverage nodes: documented in `09_CURRENT_THREADS.md`
- Capital architecture: documented in `04_CAPITAL_ARCHITECTURE.md`
The pipeline includes daily API budget trackers to prevent runaway costs:
- Node Tracker (Stage 1): 50 calls/day default (tracked in `.api_budget.json`)
- Daily Intelligence (Stage 4): 75 calls/day (tracked in `.api_budget_intel.json`)
- Llama Scout (Stage 2): 1 call per pipeline run (extracts from all nodes at once)
- Convergence detection (Stage 3): no API calls (local analysis only)
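The budget-guard pattern behind those counters can be sketched as follows (the actual layout of `.api_budget.json` is an assumption):

```python
import json
import datetime
import pathlib

def consume_budget(path: str, daily_limit: int) -> bool:
    """Record one API call against today's budget.

    Returns True if the call is within budget, False once the daily
    cap is hit. The counter resets automatically when the date rolls over.
    """
    today = datetime.date.today().isoformat()
    p = pathlib.Path(path)
    state = json.loads(p.read_text()) if p.exists() else {}
    if state.get("date") != today:
        state = {"date": today, "calls": 0}  # new day, fresh budget
    if state["calls"] >= daily_limit:
        return False                          # cap reached, refuse the call
    state["calls"] += 1
    p.write_text(json.dumps(state))
    return True
```

Each stage would call this before hitting its API and skip the request on `False`, which is what keeps a scheduled run from running away on cost.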
First line of Python: October 2025. Everything above: built since then.