
feat: LangGraph viral YouTube Short generator — 4-agent pipeline with BMasterAI telemetry#49

Merged
travis-burmaster merged 1 commit into main from feat/langgraph-viral-youtube-agent
Mar 11, 2026
Conversation

@ellucas-creator
Collaborator

New Example: langgraph-viral-youtube

Four specialist agents collaborate in a LangGraph StateGraph to produce a complete, production-ready YouTube Shorts package — fully instrumented with BMasterAI telemetry.

Architecture

trend_researcher → hook_writer → script_writer → title_and_tags
                                                        ↓
                                                  quality_gate
                                                   /         \
                                            approved       retry (≤2x)
                                               ↓                ↓
                                              END          hook_writer
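The conditional edge after the quality gate can be sketched as a plain routing function. This is an illustrative reconstruction: the function name and state keys (`quality_passed`, `retry_count`) are assumptions, not the repository's actual identifiers.

```python
MAX_RETRIES = 2  # the pipeline allows at most two retry iterations

def route_after_quality_gate(state: dict) -> str:
    """Return the name of the next node based on the quality gate's verdict."""
    if state.get("quality_passed"):
        return "END"
    if state.get("retry_count", 0) < MAX_RETRIES:
        return "hook_writer"  # loop back and rewrite the hook
    return "END"  # give up after exhausting retries
```

In LangGraph, a function like this would be wired in via `add_conditional_edges`, mapping each returned string to the corresponding node.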

Agents

| Agent | Responsibility |
| --- | --- |
| trend_researcher | Tavily web search → viral angle + competitor hooks |
| hook_writer | ≤12-word scroll-stopping opening line |
| script_writer | Full 45–60s Hook→Conflict→Build→Payoff→CTA script |
| title_and_tags | Viral title + 10 SEO tags + thumbnail concept |
| quality_gate | Validates package; triggers retry loop (max 2 iterations) |
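The quality_gate's validation can be sketched as a pure function over the package. The thresholds and field names below are assumptions for illustration, not the repository's actual values.

```python
# Illustrative quality-gate checks; thresholds are named constants
# so they are easy to tune (hypothetical values).
MAX_HOOK_WORDS = 12
MIN_TAGS = 10

def validate_package(package: dict) -> list[str]:
    """Return a list of problems; an empty list means the package passes."""
    problems = []
    if len(package.get("hook", "").split()) > MAX_HOOK_WORDS:
        problems.append("hook exceeds word limit")
    if len(package.get("tags", [])) < MIN_TAGS:
        problems.append("too few SEO tags")
    if not package.get("script"):
        problems.append("script is missing")
    return problems
```

Returning a list of problems (rather than a bare boolean) lets the retry branch feed concrete feedback back to hook_writer.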

BMasterAI Integration

Every agent node logs structured telemetry:

bm.log_event(EventType.TASK_START, agent_id="hook-writer", metadata={...})
bm.log_event(EventType.LLM_CALL, ...)
bm.log_event(EventType.TASK_COMPLETE, ...)
bm.log_event(EventType.TASK_ERROR, ...)

Output: logs/agents.log + logs/agents.jsonl

Quick Start

pip install -r requirements.txt
cp .env.example .env   # ANTHROPIC_API_KEY + TAVILY_API_KEY
python main.py "AI agents taking over software engineering"

Why LangGraph?

  • 38M+ monthly PyPI downloads — largest framework audience
  • Framework-agnostic (complements the AWS-heavy existing examples)
  • Conditional retry loop demonstrates LangGraph's strengths vs linear pipelines
  • Creative content generation use case — different from research/technical examples

…YouTube Shorts

Four specialist agents collaborate in a LangGraph StateGraph to produce
a complete, production-ready YouTube Shorts package:

Agents:
- trend_researcher  — Tavily web search → viral angle + competitor hooks
- hook_writer       — ≤12-word scroll-stopping opening line
- script_writer     — full 45-60s Hook/Conflict/Build/Payoff/CTA script
- title_and_tags    — viral title + 10 SEO tags + thumbnail concept
- quality_gate      — validates output, triggers retry loop (max 2 iterations)

BMasterAI telemetry:
- TASK_START / LLM_CALL / TASK_COMPLETE / TASK_ERROR on every agent node
- Structured JSONL output to logs/agents.jsonl
- Human-readable log to logs/agents.log

Usage:
  pip install -r requirements.txt
  python main.py "AI agents taking over software engineering"

New deps: langgraph, langchain-anthropic, langchain-core, tavily-python, bmasterai
@claude

claude bot commented Mar 11, 2026

Claude finished @ellucas-creator's task


Code Review: LangGraph Viral YouTube Agent

Overall Assessment ✅

This is a well-structured implementation that effectively demonstrates LangGraph capabilities with BMasterAI telemetry integration. The code is clean, follows Python best practices, and provides a compelling creative content generation use case.

✅ Strengths

Architecture & Design

  • Clean separation of concerns across agents.py, graph.py, state.py, and main.py
  • Smart retry logic with quality gate validation
  • TypedDict state management provides excellent type safety
  • Well-designed agent specialization (trend research → hook → script → title/tags)

BMasterAI Integration

  • Comprehensive telemetry coverage with structured logging to both .log and .jsonl
  • Proper event types used throughout (TASK_START, LLM_CALL, TASK_COMPLETE, TASK_ERROR)
  • Monitoring setup with get_monitor().start_monitoring() in agents.py:22

Code Quality

  • Consistent error handling with try/except blocks in all agent functions
  • Clear function documentation and type hints
  • Good use of environment variables for API keys
  • Readable CLI interface with interactive prompts

⚠️ Issues & Recommendations

Security Concerns

  • Medium Risk: API keys loaded globally without validation in agents.py:26. Consider adding:
    if not os.environ.get("TAVILY_API_KEY"):
        raise EnvironmentError("TAVILY_API_KEY required")

Error Handling Gaps

  • agents.py:85-86: Dictionary parsing could fail with malformed LLM responses. Add validation:
    lines = {}
    for line in raw.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            lines[key.strip()] = value.strip()
  • No handling for Anthropic API rate limits or network timeouts
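One minimal sketch of the missing rate-limit/timeout handling is a generic retry wrapper with exponential backoff. The function below is illustrative (the exception types caught would need to match the Anthropic client's actual errors):

```python
import time

def call_with_retries(fn, max_attempts: int = 3, base_delay: float = 1.0):
    """Call fn(), retrying on transient errors with exponential backoff.

    Catches TimeoutError/ConnectionError as stand-ins; a real
    implementation would also catch the SDK's rate-limit exception.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except (TimeoutError, ConnectionError):
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Each LLM call in an agent node could then be wrapped as `call_with_retries(lambda: llm.invoke(prompt))`.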

Performance Considerations

  • Sequential agent execution in graph.py:47-50 - trend research and title/tags could potentially run in parallel since they don't depend on each other
  • No LLM response caching for similar queries

Code Improvements

  • agents.py:241-248: Magic numbers in quality gate validation (15 words, 200 chars, 5 tags) should be constants
  • main.py:64: Hard-coded output filename could conflict in concurrent runs
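The filename-collision concern above could be addressed with a timestamp plus a random suffix. This is a sketch; the directory and prefix are placeholder values:

```python
import os
import time
import uuid

def unique_output_path(directory: str = ".", prefix: str = "short_package") -> str:
    """Build a collision-resistant output filename for concurrent runs."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    suffix = uuid.uuid4().hex[:8]  # random component breaks same-second ties
    return os.path.join(directory, f"{prefix}_{stamp}_{suffix}.json")
```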

🧪 Test Coverage

Major Gap: No unit tests provided. Recommend adding:

  • Agent function testing with mock LLM responses
  • State validation testing
  • Error scenario coverage
  • Integration tests for the full pipeline
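A mock-LLM unit test of the kind recommended above might look like the following. The simplified `hook_writer` signature here is hypothetical; the real node reads its LLM from module scope rather than a parameter:

```python
from unittest.mock import MagicMock

def hook_writer(state: dict, llm) -> dict:
    """Simplified hook_writer node for illustration: asks the LLM for a hook."""
    response = llm.invoke(f"Write a hook for: {state['topic']}")
    return {**state, "hook": response.content.strip()}

def test_hook_writer_uses_llm_output():
    llm = MagicMock()
    llm.invoke.return_value = MagicMock(content=" Stop scrolling now ")
    result = hook_writer({"topic": "AI agents"}, llm)
    assert result["hook"] == "Stop scrolling now"
    llm.invoke.assert_called_once()
```

Injecting the LLM (or patching it with `unittest.mock.patch`) keeps these tests fast and free of API keys.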

📋 Missing Features

  • No configuration file for customizing quality thresholds
  • No support for different LLM models/providers
  • Limited observability beyond logs (no metrics dashboard)

🔍 Dependencies

  • All dependencies are well-scoped and use appropriate version ranges
  • BMasterAI integration is correctly implemented per the library patterns

💡 Suggestions

  1. Add input validation for topic length/content
  2. Consider adding structured output validation for LLM responses
  3. Implement configurable retry strategies
  4. Add health checks for external APIs before starting pipeline

Final Verdict: Approve with Minor Fixes

This is a solid implementation that effectively showcases LangGraph + BMasterAI integration. The identified issues are minor and don't prevent the example from functioning correctly. The code demonstrates best practices for multi-agent systems and provides excellent documentation.


@travis-burmaster merged commit 119e967 into main on Mar 11, 2026
2 checks passed
@travis-burmaster deleted the feat/langgraph-viral-youtube-agent branch on March 15, 2026 at 11:25
