Briefly is a modern command-line application written in Go that transforms RSS feeds and curated articles into intelligent, LinkedIn-ready digests. With v3.1 Hierarchical Summarization, it generates comprehensive digests that synthesize ALL articles while maintaining conciseness, featuring database-driven workflows, theme-based classification, and executive summaries grounded in complete content coverage.
- Hierarchical Summarization (v3.1): Revolutionary two-stage digest generation
- Stage 1: Generate comprehensive cluster narratives from ALL articles in each topic cluster (not just top 3)
- Stage 2: Synthesize cluster narratives into concise executive summary
- Result: Short digests that are grounded in complete content coverage
- No information loss: Every article contributes to the final digest
- Better credibility: Summaries accurately reflect all underlying content
- Database-Driven Digest Generation: Generate digests from classified articles stored in PostgreSQL
- Command: `briefly digest generate --since N` for date-based digest creation
- LLM-powered article summarization with intelligent caching
- K-means clustering for automatic topic grouping
- Theme-based classification with relevance scores
- ON CONFLICT upsert for digest regeneration
- LinkedIn-ready markdown output
- Web Digest Viewer: Beautiful, responsive digest viewing experience
- Digest list page with card layout (`/digests`)
- Digest detail page with full content (`/digests/{id}`)
- Markdown-rendered executive summaries using marked.js
- Theme-grouped article display with emoji indicators
- Article summaries with relevance scoring
- REST API endpoints: list, detail, latest digest
- Mobile-responsive TailwindCSS design
- PostHog analytics integration
- Theme-Based Classification: LLM-powered article categorization with 10 predefined themes (AI & ML, Cloud & DevOps, Software Engineering, Web Development, Data Engineering, Security, Programming Languages, Mobile, Open Source, Product & Startup)
- Manual URL Submission: Submit individual articles via CLI, REST API, or web form for one-off processing
- LangFuse Observability: Track all LLM API calls with cost estimation, token counting, and performance metrics
- PostHog Analytics: Product analytics integrated across CLI, API, and web pages for usage insights
- Theme Management: Full CRUD operations via CLI and REST API for managing classification themes
- Web Interface: HTML pages for theme management and URL submission with integrated PostHog tracking
- Status Tracking: Complete workflow for manual URLs (pending → processing → processed/failed)
- Intelligent Content Filtering: Advanced relevance scoring automatically filters articles by importance, keeping only high-value content (🔥 Critical ≥0.8, ⭐ Important 0.6-0.8, 💡 Optional <0.6)
- Word Count Optimization: Generates precise 200-500 word digests with real-time word counting and read time estimates ("📖 342 words • ⏱️ 2m read")
- Unified Relevance Architecture: Reusable scoring system serves digest filtering, research ranking, and interactive browsing with context-aware weight profiles
- Actionable Recommendations: "⚡ Try This Week" section with 2-3 specific, implementable actions (5-8 words each) like "Test the mentioned API in a small project this week"
- Smart Theme Detection: Automatically infers digest themes (AI, security, performance, etc.) for targeted relevance scoring
- Configurable Filtering: Command-line control with `--min-relevance`, `--max-words`, and `--enable-filtering` flags
- Smart Content Processing: Reads URLs from Markdown files and intelligently extracts main article content
- AI-Powered Summarization: Uses Gemini API to generate concise, meaningful summaries with word-based limits (15-25 words per article)
- Multiple Output Formats: Choose from brief (200 words), standard (400 words), detailed, newsletter (500 words), or HTML email formats
- AI-Powered Insights: Comprehensive insights automatically integrated into every digest:
- Sentiment Analysis: Emotional tone analysis with emoji indicators (😊 positive, 😞 negative, 🤔 neutral)
- Alert Monitoring: Configurable alert conditions with automatic evaluation and notifications
- Trend Analysis: Week-over-week comparison of topics and themes when historical data is available
- Deep Research: AI-driven research suggestions and topic exploration with configurable depth
- Prompt Corner: Newsletter format includes AI-generated prompts based on digest content that readers can copy and use with any LLM (ChatGPT, Gemini, Claude, etc.)
- Personal Commentary: Add your own "My Take" to any digest with AI-powered regeneration that integrates your voice throughout the entire content
- Intelligent Caching: SQLite-based caching system to avoid re-processing articles and summaries
- Cost Estimation: Dry-run mode to estimate API costs before processing
- Template System: Customizable output formats with built-in templates
- Terminal UI: Interactive TUI for browsing articles and summaries
- Modern CLI: Built with Cobra for intuitive command-line experience
- Structured Logging: Comprehensive logging with multiple output formats
- Configuration Management: Flexible configuration via files, environment variables, or flags
- Multi-Channel Output (v1.0): Rich output options for different platforms:
- HTML Email: Responsive email templates with inline CSS for maximum compatibility
- Slack/Discord: Platform-optimized messages with webhooks, sentiment emojis, and rich formatting
- Text-to-Speech: Generate MP3 audio files using OpenAI TTS, ElevenLabs, or other providers
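As a rough illustration of the relevance tiers and read-time estimates described in the feature list above, here is a minimal Go sketch. The function names `tierFor` and `readTimeLine` are illustrative, not Briefly's actual API, and the ~200 words-per-minute reading speed is an assumption.

```go
package main

import "fmt"

// tierFor maps a relevance score to the display tier used in digests.
// Thresholds follow the feature list: Critical >= 0.8, Important 0.6-0.8,
// Optional < 0.6.
func tierFor(score float64) string {
	switch {
	case score >= 0.8:
		return "🔥 Critical"
	case score >= 0.6:
		return "⭐ Important"
	default:
		return "💡 Optional"
	}
}

// readTimeLine renders the word-count footer, assuming ~200 words per
// minute (a common reading-speed estimate; the real constant may differ).
func readTimeLine(words int) string {
	minutes := (words + 199) / 200 // round up to the next minute
	return fmt.Sprintf("📖 %d words • ⏱️ %dm read", words, minutes)
}

func main() {
	fmt.Println(tierFor(0.85))
	fmt.Println(readTimeLine(342)) // matches the "342 words • 2m read" example
}
```

With these assumptions, a 342-word digest rounds up to a 2-minute read, matching the example shown in the feature list.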
- Go (version 1.23 or higher recommended)
- A Gemini API Key (required for core functionality)
- OpenAI API Key (optional, for TTS audio generation)
- ElevenLabs API Key (optional, for premium TTS voices)
- Slack/Discord webhook URLs (optional, for messaging integration)
1. Clone the Repository:

   ```bash
   git clone https://github.com/rcliao/briefly.git
   cd briefly
   ```

2. Install Dependencies:

   ```bash
   go mod tidy
   ```

3. Build the Application:

   ```bash
   # Build for current platform
   go build -o briefly ./cmd/briefly

   # Or build and install to $GOPATH/bin
   go install ./cmd/briefly
   ```
Check the Releases page for pre-built binaries for your platform.
Theme Management:

```bash
# List all themes
briefly theme list

# Add a new theme
briefly theme add "Blockchain" -d "Blockchain and cryptocurrency" -k "blockchain,crypto,web3"

# Enable/disable a theme
briefly theme enable <theme-id>
briefly theme disable <theme-id>

# Update theme keywords
briefly theme update <theme-id> -k "blockchain,crypto,web3,defi"

# Delete a theme
briefly theme delete <theme-id>
```

Manual URL Submission:
```bash
# Submit URLs for processing
briefly manual-url add https://example.com/article

# Submit multiple URLs at once
briefly manual-url add https://example.com/article1 https://example.com/article2

# List all submitted URLs
briefly manual-url list

# Filter by status
briefly manual-url list --status pending
briefly manual-url list --status failed

# Retry a failed URL
briefly manual-url retry <url-id>

# Delete a submitted URL
briefly manual-url delete <url-id>
```

Digest Generation (Phase 1):
```bash
# Generate digest from classified articles in database
briefly digest generate --since 7   # Last 7 days

# Filter by theme
briefly digest generate --theme "AI & Machine Learning" --since 7

# Generates:
# - LLM-powered article summaries (cached when available)
# - Executive summary from top articles
# - Theme-grouped LinkedIn-ready markdown
# - Saves to database and digests/ directory
```

Web Server (Phase 0 + Phase 1):
```bash
# Start web server (includes digest viewer, theme management, URL submission)
briefly serve

# Access web interfaces:
# - http://localhost:8080/ - Homepage
# - http://localhost:8080/digests - View all digests (Phase 1)
# - http://localhost:8080/digests/{id} - View digest detail (Phase 1)
# - http://localhost:8080/themes - Theme management
# - http://localhost:8080/submit - URL submission

# REST API endpoints:
# - GET /api/digests - List digests (Phase 1)
# - GET /api/digests/{id} - Get digest detail (Phase 1)
# - GET /api/digests/latest - Get latest digest (Phase 1)
# - GET /api/themes - Theme REST API
# - POST /api/themes - Create theme
# - GET /api/manual-urls - Manual URL REST API
```

Copy the example configuration files and customize them:
```bash
# Copy configuration templates
cp .env.example .env
cp .briefly.yaml.example .briefly.yaml

# Edit with your API keys
nano .env
```

📖 For the detailed configuration guide, see CONFIGURATION.md
Get your API key from Google AI Studio and set it:

```bash
# In .env file (recommended)
GEMINI_API_KEY=your-gemini-api-key-here
```

For research features, configure a search provider:

```bash
# Google Custom Search (recommended)
GOOGLE_CSE_API_KEY=your-google-api-key
GOOGLE_CSE_ID=your-search-engine-id

# Or SerpAPI (premium)
SERPAPI_KEY=your-serpapi-key
```

Configuration can be supplied via:

- Environment Variables (`.env` file)
- YAML Configuration (`.briefly.yaml`)
- Command-line flags
Examples:

.env file:

```bash
# Required
GEMINI_API_KEY=your-gemini-key

# Optional - Search Providers
GOOGLE_CSE_API_KEY=your-google-key
GOOGLE_CSE_ID=your-search-engine-id

# Optional - Phase 0 Observability
LANGFUSE_PUBLIC_KEY=pk_...
LANGFUSE_SECRET_KEY=sk_...
LANGFUSE_HOST=https://cloud.langfuse.com
POSTHOG_API_KEY=phc_...
POSTHOG_HOST=https://app.posthog.com

# Optional - Multi-channel output
SLACK_WEBHOOK_URL=https://hooks.slack.com/...
OPENAI_API_KEY=your-openai-key
```

.briefly.yaml file:
```yaml
gemini:
  model: "gemini-1.5-pro"

output:
  directory: "my-digests"

research:
  default_provider: "google"
  deep_research:
    max_sources: 30

# Phase 0: Observability (optional)
observability:
  langfuse:
    enabled: true
    public_key: "pk_..."
    secret_key: "sk_..."
    host: "https://cloud.langfuse.com"
  posthog:
    enabled: true
    api_key: "phc_..."
    host: "https://app.posthog.com"
```

Configuration is loaded in the following order (later sources override earlier ones):
1. Default values
2. Configuration file (`.briefly.yaml`)
3. Environment variables
4. Command-line flags
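The precedence order above can be sketched as a last-non-empty-wins lookup. This is a simplified illustration, not Briefly's actual configuration loader (which layers sources via a Viper-style mechanism); the function name `resolve` is hypothetical.

```go
package main

import "fmt"

// resolve returns the last non-empty value, so later sources override
// earlier ones: defaults < config file < environment < flags.
func resolve(defaultVal, fileVal, envVal, flagVal string) string {
	val := defaultVal
	for _, v := range []string{fileVal, envVal, flagVal} {
		if v != "" {
			val = v
		}
	}
	return val
}

func main() {
	// A command-line flag wins over both the env var and the config file.
	fmt.Println(resolve("digests", "my-digests", "", "./out")) // ./out
	// With no flag set, the environment value wins over the config file.
	fmt.Println(resolve("digests", "my-digests", "env-digests", ""))
}
```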
Briefly uses a modern CLI interface with subcommands. Here's the recommended workflow:
```bash
# 1. Add RSS/Atom feeds
briefly feed add https://hnrss.org/newest
briefly feed add https://blog.golang.org/feed.atom

# 2. Aggregate news (run daily via cron)
briefly aggregate --since 24   # Fetches articles from last 24 hours

# 3. Generate weekly digest with hierarchical summarization
briefly digest generate --since 7   # Last 7 days
```

Using Database (Recommended - with Hierarchical Summarization):
```bash
# Basic usage - generates digest from classified articles in database
briefly digest generate --since 7

# Specify custom output directory
briefly digest generate --since 7 --output ./my-digests

# Filter by specific theme
briefly digest generate --theme "AI & Machine Learning" --since 7

# View recent digests
briefly digest list --limit 20

# Show specific digest
briefly digest show <digest-id>
```

Feed Management:

```bash
# Add RSS/Atom feeds
briefly feed add https://example.com/feed.xml

# List all feeds
briefly feed list

# Remove a feed
briefly feed remove <feed-id>
```

Aggregation:

```bash
# Aggregate articles from all feeds (last 24 hours)
briefly aggregate --since 24

# Aggregate with theme classification
briefly aggregate --since 24 --themes
```

Read a Single Article:

```bash
# Get quick summary of a single article
briefly read https://example.com/article

# Force fresh fetch (bypass cache)
briefly read --no-cache https://example.com/article
```

Cache Management:

```bash
# View cache statistics
briefly cache stats

# Clear all cached data
briefly cache clear --confirm
```

Web Server:

```bash
# Start web server
briefly serve

# Access:
# - http://localhost:8080/digests - View all digests
# - http://localhost:8080/themes - Theme management
# - http://localhost:8080/submit - Submit URLs
```

Briefly uses a revolutionary two-stage hierarchical approach to generate digests that are both concise and comprehensive:
For each topic cluster discovered by K-means clustering:
- Collect ALL articles in the cluster (not just top 3)
- Generate a comprehensive narrative (2-3 paragraphs) synthesizing all articles
- Extract key themes and maintain article citations
- Result: Each cluster has a cohesive summary covering all related articles
From the cluster narratives:
- Synthesize cluster narratives into a concise executive summary
- Show connections between different clusters
- Include citations to specific articles using `[1][2][3]` format
- Result: Short, readable digest grounded in complete content coverage
- ✅ No Information Loss: Every article contributes to the final digest
- ✅ Better Credibility: Summaries accurately reflect all content
- ✅ Maintains Conciseness: Executive summary stays short by synthesizing clusters, not all individual articles
- ✅ Natural Flow: Cluster narratives create coherent story arcs
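The two-stage shape described above can be sketched in Go. In Briefly both stages are LLM calls; here plain string joins stand in for them so the structure is visible. The types `Article` and `Cluster` are simplified stand-ins, not the project's actual internal types.

```go
package main

import (
	"fmt"
	"strings"
)

// Article and Cluster are simplified stand-ins for Briefly's internal types.
type Article struct {
	Title   string
	Summary string
}

type Cluster struct {
	Topic    string
	Articles []Article
}

// clusterNarrative (Stage 1) folds EVERY article in a cluster into one
// narrative, keeping per-article citations.
func clusterNarrative(c Cluster) string {
	parts := make([]string, 0, len(c.Articles))
	for i, a := range c.Articles {
		parts = append(parts, fmt.Sprintf("%s [%d]", a.Summary, i+1))
	}
	return c.Topic + ": " + strings.Join(parts, " ")
}

// executiveSummary (Stage 2) synthesizes the cluster narratives, not the
// individual articles, which is what keeps the final digest short.
func executiveSummary(clusters []Cluster) string {
	narratives := make([]string, 0, len(clusters))
	for _, c := range clusters {
		narratives = append(narratives, clusterNarrative(c))
	}
	return strings.Join(narratives, "\n")
}

func main() {
	clusters := []Cluster{{
		Topic:    "AI & ML",
		Articles: []Article{{"A", "New model released"}, {"B", "Benchmarks improve"}},
	}}
	fmt.Println(executiveSummary(clusters))
}
```

The key property carried over from the real pipeline: Stage 2 never sees individual articles, only cluster narratives, so its output length grows with the number of clusters rather than the number of articles.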
#### Text-to-Speech Audio
```bash
# Generate MP3 using OpenAI TTS
briefly generate-tts input/links.md --provider openai --voice alloy
# Generate using ElevenLabs
briefly generate-tts input/links.md --provider elevenlabs --voice Rachel
# Customize audio generation
briefly generate-tts input/links.md \
--provider openai \
--voice nova \
--speed 1.2 \
--max-articles 5 \
--output audio/
# Available providers:
# - openai: High-quality voices (alloy, echo, fable, onyx, nova, shimmer)
# - elevenlabs: Premium natural voices (Rachel, Domi, Bella, Antoni, Arnold)
# - mock: For testing (creates text file instead of audio)
```

Launch an interactive TUI to browse articles and summaries:

```bash
briefly tui
```

The newsletter format includes a special "Prompt Corner" section that automatically generates interesting prompts based on the digest content. These prompts are designed to be copied and pasted into any LLM (ChatGPT, Gemini, Claude, etc.) for further exploration of the topics covered.
Example Prompt Corner Output:
## 🎯 Prompt Corner
Here are some prompts inspired by today's digest:
"Act as a senior software engineer. I'm trying to refactor a legacy section of Python code. Using the capabilities of a hypothetical 'Claude Opus 4' coding model with access to the filesystem and web search, propose a refactoring plan, including justifications and potential risks."
This prompt simulates using advanced AI coding features for real-world refactoring problems.
"I have a list of small bug fixes for a Node.js application. As GitHub Copilot Coding Agent, suggest a prioritized order for these tasks, outlining the approach and estimated time for each."
This prompt leverages AI task delegation capabilities for project management.
The prompts are:
- Contextual: Directly inspired by the articles in your digest
- Practical: Ready to use for real development scenarios
- Portable: Work with any LLM platform
- Educational: Include explanations of what each prompt accomplishes
Global Flags:

- `--config`: Specify a configuration file

Digest Command Flags:

- `--output, -o`: Output directory for digest files (default: "digests")
- `--format, -f`: Digest format: brief, standard, detailed, newsletter (default: "standard")
- `--dry-run`: Estimate costs without making API calls
- `--min-relevance`: Minimum relevance threshold for article inclusion (0.0-1.0, default: 0.6)
- `--max-words`: Maximum words for entire digest (0 for template default)
- `--enable-filtering`: Enable relevance-based content filtering (default: true)
```bash
# Basic digest generation
briefly digest input/weekly-links.md

# Newsletter format with custom output directory
briefly digest --format newsletter --output ./newsletters input/links.md

# Cost estimation before processing
briefly digest --dry-run input/expensive-links.md

# Using environment variable for API key
export GEMINI_API_KEY="your_key_here"
briefly digest input/links.md

# Complete workflow with AI-powered personal commentary
briefly digest input/weekly-links.md                        # Generate digest
briefly my-take list                                        # See available digests
briefly my-take add 1234abcd "Great insights this week!"    # Add your perspective
briefly my-take regenerate 1234abcd                         # AI regenerates entire digest with your voice integrated throughout

# AI-powered insights and research workflow
briefly digest input/weekly-links.md                        # Generate digest with automatic insights
briefly insights alerts list                                # View current alert configurations
briefly insights alerts add --keyword "AI" --priority high  # Add new alert condition
briefly research --topic "AI development trends" --depth 2  # Deep research on emerging topics
```

Every digest automatically includes a comprehensive "AI-Powered Insights" section with:
- 📊 Sentiment Analysis: Emotional tone analysis with emoji indicators
- 🚨 Alert Monitoring: Configurable alert conditions and notifications
- 📈 Trend Analysis: Week-over-week topic and theme comparison
- 🔍 Research Suggestions: AI-generated queries for deeper topic exploration
```bash
# Alert Management
briefly insights alerts list                                      # List all configured alerts
briefly insights alerts add --keyword "security" --priority high  # Add keyword alert
briefly insights alerts add --topic "AI" --threshold 3            # Add topic frequency alert
briefly insights alerts remove <alert-id>                         # Remove specific alert

# Trend Analysis
briefly insights trends                 # Show recent trend analysis
briefly insights trends --days 14       # Trends over specific period
briefly insights trends --topic "AI"    # Trends for specific topic

# Deep Research
briefly research --topic "machine learning" --depth 2                # Research with 2 iterations
briefly research --topic "cybersecurity" --depth 3 --max-results 10  # Detailed research
briefly research --list                                              # Show recent research sessions
```

The deep research feature provides AI-driven topic exploration:
- AI Query Generation: Gemini generates relevant search queries for your topic
- Iterative Research: Configurable depth for multi-level topic exploration
- Source Discovery: Finds and processes additional relevant sources
- Integration: Research results can be integrated into future digests
- Mock Search Provider: Currently uses a mock search provider for demonstration
Example Research Session:
```bash
briefly research --topic "AI coding assistants" --depth 2

# Output:
# 🔍 Starting Deep Research Session
# Topic: AI coding assistants
# Depth: 2 iterations
#
# Iteration 1: Generated 3 search queries
# - "best AI coding assistants 2025 comparison"
# - "GitHub Copilot vs ChatGPT vs Claude coding"
# - "AI pair programming tools developer productivity"
#
# Iteration 2: Generated 3 additional queries
# - "AI code completion accuracy benchmarks"
# - "enterprise AI coding tools integration"
# - "future of AI-assisted software development"
#
# Research completed. Found 6 relevant sources.
# Results stored and can be included in future digests.
```

Input files should be Markdown files containing URLs. Briefly will extract all HTTP/HTTPS URLs found anywhere in the file.
```markdown
---
date: 2025-05-30
title: "Weekly Tech Links"
---

# Interesting Articles This Week

Here are some articles I found interesting:

- https://example.com/article-1
- https://news.site.com/important-update
- Check this out: https://blog.example.org/research-paper

## AI and Development

- [Claude 4 Release](https://anthropic.com/news/claude-4)
- https://zed.dev/blog/fastest-ai-code-editor

Some inline links like https://github.com/project/repo are also extracted.
```

The application will automatically extract all URLs regardless of their formatting (plain text, markdown links, inline, etc.).
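Extraction of this kind is typically a single regular-expression pass. The sketch below shows the idea in Go; the pattern is deliberately simple and Briefly's actual extractor may be stricter about trailing punctuation and URL validity.

```go
package main

import (
	"fmt"
	"regexp"
)

// urlPattern is a simple HTTP/HTTPS matcher. It stops at whitespace and
// common markdown delimiters so `[title](https://...)` links are captured
// cleanly without the closing parenthesis.
var urlPattern = regexp.MustCompile(`https?://[^\s)\]>"']+`)

// extractURLs finds every HTTP/HTTPS URL in a markdown document, whether
// it appears as plain text, a markdown link, or inline prose.
func extractURLs(markdown string) []string {
	return urlPattern.FindAllString(markdown, -1)
}

func main() {
	doc := "- https://example.com/article-1\n" +
		"- [Claude 4 Release](https://anthropic.com/news/claude-4)\n" +
		"Inline like https://github.com/project/repo too."
	for _, u := range extractURLs(doc) {
		fmt.Println(u)
	}
}
```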
- URL Extraction: Parses the input Markdown file to find all HTTP/HTTPS URLs
- Content Fetching: Downloads and extracts main content from each URL using intelligent HTML parsing
- Smart Caching: Checks cache for previously processed articles to avoid redundant API calls
- Content Cleaning: Removes boilerplate content (navigation, ads, etc.) to focus on main article text
- AI Summarization: Uses Gemini API to generate word-limited summaries (15-25 words per article)
- 🎯 v2.0 Relevance Filtering:
- Theme Detection: Automatically infers digest theme from article titles and content
- Relevance Scoring: Uses KeywordScorer to evaluate content relevance with configurable weights
- Quality Filtering: Removes low-quality content (short articles, spam domains, missing titles)
- Threshold Filtering: Keeps only articles meeting minimum relevance score (default 0.6)
- Word Budget Management: Prioritizes high-relevance content when approaching word limits
- AI-Powered Insights Generation: Automatically analyzes filtered content for:
- Sentiment Analysis: Determines emotional tone and assigns appropriate emoji indicators
- Alert Evaluation: Checks configured alert conditions against article content and topics
- Trend Detection: Compares current topics with historical data when available
- Research Suggestions: Generates AI-driven research queries for deeper topic exploration
- 🎯 v2.0 Actionable Recommendations: Generates "⚡ Try This Week" section with 2-3 specific, technology-aware action items
- Template Processing: Applies word-optimized format templates with integrated insights and recommendations
- Word Count Optimization: Ensures output meets target word limits (200-500 words) with read time estimates
- Final Digest Generation: Creates cohesive, scannable digest with proper citations and comprehensive sections
- Output: Saves the final digest as a Markdown file with word count statistics and filtering results
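The threshold filtering and word-budget steps above can be sketched as a greedy pass: drop anything below the relevance threshold, then spend the word budget on the highest-relevance articles first. This is an illustrative simplification under stated assumptions, not Briefly's exact algorithm; the type `scored` and function `filterAndBudget` are hypothetical names.

```go
package main

import (
	"fmt"
	"sort"
)

// scored pairs an article's word count with its relevance score.
type scored struct {
	Title     string
	Words     int
	Relevance float64
}

// filterAndBudget drops articles below minRelevance, then greedily keeps
// the highest-relevance articles that fit within the word budget.
func filterAndBudget(articles []scored, minRelevance float64, maxWords int) []scored {
	kept := make([]scored, 0, len(articles))
	for _, a := range articles {
		if a.Relevance >= minRelevance {
			kept = append(kept, a)
		}
	}
	// Highest relevance first, so the budget is spent on the best content.
	sort.Slice(kept, func(i, j int) bool { return kept[i].Relevance > kept[j].Relevance })
	out, used := []scored{}, 0
	for _, a := range kept {
		if used+a.Words > maxWords {
			continue // would blow the budget; skip and try smaller articles
		}
		out = append(out, a)
		used += a.Words
	}
	return out
}

func main() {
	articles := []scored{
		{"low", 100, 0.4}, {"big", 400, 0.9}, {"mid", 150, 0.7},
	}
	// "low" is filtered out (below 0.6); "big" and "mid" fit in 600 words.
	for _, a := range filterAndBudget(articles, 0.6, 600) {
		fmt.Println(a.Title)
	}
}
```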
Every digest automatically includes a comprehensive "AI-Powered Insights" section featuring:

- Sentiment Analysis:
  - Analyzes the emotional tone of each article using AI
  - Displays sentiment with emoji indicators (😊 positive, 😞 negative, 🤔 neutral/mixed)
  - Provides overall digest sentiment summary
- Alert Monitoring:
  - Evaluates configurable alert conditions against article content
  - Triggers notifications for high-priority topics or keywords
  - Displays triggered alerts with context and priority levels
- Trend Analysis:
  - Compares current digest topics with historical data when available
  - Identifies emerging themes and topic frequency changes
  - Provides week-over-week trend insights
- Deep Research Suggestions:
  - AI generates relevant research queries based on digest content
  - Provides suggestions for deeper exploration of covered topics
  - Can automatically execute research with configurable depth using the `briefly research` command
- Personal Perspective Storage: Your "my take" is stored in the local database linked to the specific digest
- Content Retrieval: System retrieves the original digest content and your personal take
- AI-Powered Rewriting: Gemini LLM receives sophisticated prompts to completely rewrite the digest incorporating your voice naturally throughout
- Cohesive Integration: Your perspective becomes part of the narrative flow rather than a separate section
- Timestamped Output: Creates a new file with a `_with_my_take_` suffix while preserving the original
- Caching: Articles and summaries are cached to avoid re-processing
- Content Extraction: Advanced HTML parsing focuses on main article content
- Cost Estimation: Dry-run mode provides cost estimates before processing
- Error Handling: Graceful handling of failed URLs with detailed logging
- Multiple Formats: Choose from different digest styles for various use cases
- AI-Powered Insights: Automatic sentiment analysis, alert monitoring, trend detection, and research suggestions
- Alert System: Configurable conditions for monitoring specific topics, keywords, or content patterns
- Research Integration: AI-driven deep research capabilities with iterative topic exploration
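Caches like the SQLite store above commonly key entries by a hash of the article URL, so repeated runs over the same link list skip refetching and resummarizing. Whether Briefly keys its cache exactly this way is an assumption; `cacheKey` is an illustrative name.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// cacheKey derives a stable, fixed-length key for an article URL.
// Hashing the URL gives a deterministic key that is safe to use as a
// database primary key or filename.
func cacheKey(url string) string {
	sum := sha256.Sum256([]byte(url))
	return hex.EncodeToString(sum[:])
}

func main() {
	// Same URL always yields the same key, so cached summaries are reused.
	fmt.Println(cacheKey("https://example.com/article"))
}
```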
Create a `.briefly.yaml` configuration file for persistent settings:

```yaml
# Gemini AI Configuration
gemini:
  api_key: ""  # Or use GEMINI_API_KEY environment variable
  model: "gemini-2.5-flash-preview-05-20"

# Output Configuration
output:
  directory: "digests"

# Future configuration options can be added here
# cache:
#   enabled: true
#   ttl: "24h"
```

```bash
# Run from source during development
go run ./cmd/briefly digest input/test-links.md

# Run tests
go test ./...

# Build for multiple platforms
GOOS=linux GOARCH=amd64 go build -o briefly-linux-amd64 ./cmd/briefly
GOOS=windows GOARCH=amd64 go build -o briefly-windows-amd64.exe ./cmd/briefly
GOOS=darwin GOARCH=amd64 go build -o briefly-darwin-amd64 ./cmd/briefly
```

Briefly includes built-in cost estimation to help manage Gemini API usage:
```bash
# Estimate costs before processing
briefly digest --dry-run input/large-link-list.md

# Example output:
# Cost Estimation for Digest Generation
# =====================================
# Articles to process: 25
# Estimated tokens per article: ~2000
# Total estimated input tokens: ~50,000
# Estimated output tokens: ~5,000
#
# Estimated costs (USD):
# - Input tokens: $0.025
# - Output tokens: $0.015
# - Total estimated cost: $0.040
```

Common Issues:
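The dry-run arithmetic is simple token math. The sketch below reproduces the example above; the per-million-token rates are placeholders chosen so the numbers match that example, not official Gemini pricing, and `estimateCost` is an illustrative name.

```go
package main

import "fmt"

// estimateCost computes input/output token costs from per-million-token
// rates. The rates here are illustrative, not current Gemini pricing.
func estimateCost(articles, tokensPerArticle, outputTokens int, inRatePerM, outRatePerM float64) (inputCost, outputCost float64) {
	inputTokens := articles * tokensPerArticle
	inputCost = float64(inputTokens) / 1e6 * inRatePerM
	outputCost = float64(outputTokens) / 1e6 * outRatePerM
	return
}

func main() {
	// 25 articles x ~2000 tokens = 50,000 input tokens; ~5,000 output tokens.
	in, out := estimateCost(25, 2000, 5000, 0.50, 3.00)
	fmt.Printf("input $%.3f, output $%.3f, total $%.3f\n", in, out, in+out)
}
```

At $0.50/M input and $3.00/M output this reproduces the $0.025 + $0.015 = $0.040 total shown in the example output.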
- API Key not found: Ensure `GEMINI_API_KEY` is set or configured in `.briefly.yaml`
- Permission denied: Make sure the output directory is writable
- Network timeouts: Some websites may be slow or block requests
- Cache issues: Clear cache with `briefly cache clear --confirm`
Debug Logging:
The application provides detailed logging. Check logs for specific error messages when articles fail to process.
```
briefly/
├── cmd/
│   ├── briefly/          # Main application entry point
│   │   └── main.go
│   ├── cmd/              # CLI commands and configuration
│   │   └── root.go       # Cobra CLI setup and command definitions
│   └── main.go           # Alternative entry point
├── internal/             # Internal packages
│   ├── alerts/           # Alert monitoring and evaluation system
│   ├── clustering/       # Topic clustering and analysis
│   ├── core/             # Core data structures (Article, Summary, etc.)
│   ├── cost/             # Cost estimation functionality
│   ├── feeds/            # RSS feed processing (future feature)
│   ├── fetch/            # URL fetching and content extraction
│   ├── llm/              # LLM client abstraction
│   ├── logger/           # Structured logging setup
│   ├── relevance/        # 🎯 v2.0 Unified relevance scoring architecture
│   ├── render/           # Digest rendering and output
│   ├── research/         # Deep research and AI query generation
│   ├── sentiment/        # Sentiment analysis functionality
│   ├── store/            # SQLite caching system
│   ├── templates/        # Word-optimized digest format templates
│   ├── trends/           # Trend analysis and historical comparison
│   └── tui/              # Terminal user interface
├── llmclient/            # Legacy Gemini client (being phased out)
│   └── gemini_client.go
├── input/                # Example input files
├── digests/              # Generated digest outputs
├── temp_content/         # Cached article content
├── docs/                 # Documentation
├── .env                  # Environment variables (local)
├── .briefly.yaml         # Configuration file
├── go.mod                # Go module definition
├── go.sum                # Dependency checksums
└── README.md             # This file
```
- `cmd/briefly/main.go`: Application entry point
- `cmd/handlers/root_simplified.go`: CLI command definitions and routing
- `cmd/handlers/theme.go`: 🆕 Phase 0 - Theme management CLI
- `cmd/handlers/manual_url.go`: 🆕 Phase 0 - Manual URL management CLI
- `internal/core/`: Core data structures (Article, Summary, Theme, ManualURL)
- `internal/fetch/`: Web scraping and content extraction
- `internal/llm/`: AI/LLM integration layer
- `internal/llm/traced_client.go`: 🆕 Phase 0 - Observability-wrapped LLM client
- `internal/themes/`: 🆕 Phase 0 - LLM-based theme classification
- `internal/observability/`: 🆕 Phase 0 - LangFuse and PostHog clients
- `internal/persistence/`: PostgreSQL storage with theme and manual URL repositories
- `internal/server/`: HTTP server with theme and manual URL endpoints
- `internal/store/`: SQLite-based caching system
- `internal/templates/`: Output format templates
- `internal/tui/`: Interactive terminal interface
- `internal/alerts/`: Alert monitoring and evaluation system
- `internal/relevance/`: 🎯 v2.0 Unified relevance scoring system with interfaces, keyword scorer, and filtering logic
- `internal/sentiment/`: Sentiment analysis functionality
- `internal/trends/`: Trend analysis and historical comparison
- `internal/research/`: Deep research and AI query generation
- `internal/clustering/`: Topic clustering and analysis
See docs/executions/2025-10-31.md for the current implementation roadmap.
Current Status:
- ✅ Phase 0: Observability & Manual Curation - Complete
- 🎯 Phase 1: Digest Generation & Web Viewer - 85% Complete
- ✅ Theme-based article classification with LLM (10 default themes)
- ✅ Manual URL submission system (CLI, API, Web)
- ✅ LangFuse integration for LLM observability
- ✅ PostHog integration for product analytics
- ✅ Theme management with full CRUD operations
- ✅ Web pages with PostHog tracking
- ✅ TracedClient pattern for observability-wrapped LLM calls
- ✅ Database migrations for themes, manual URLs, and article themes
✅ Completed Features:

- ✅ Digest Generation Command - `briefly digest generate --since N`
  - Generate digests from database articles by date/theme
  - LLM-powered article summarization with caching
  - Executive summary generation from top articles
  - Theme-based grouping with relevance scores
  - Save to PostgreSQL with ON CONFLICT upsert
  - LinkedIn-ready markdown output
- ✅ Web Digest Viewer
  - REST API: GET /api/digests (list, detail, latest)
  - HTML pages: /digests (list), /digests/{id} (detail)
  - Markdown rendering for executive summaries
  - Theme-grouped article display
  - Mobile-responsive TailwindCSS design
⏳ Remaining:
- RSS feed aggregation with inline theme classification
- Scheduled digest generation automation
- Implement full CRUD for articles, digests, feeds
- Add pagination, filtering, search
- Populate database statistics in status endpoint
- Theme-based filtering for all endpoints
- Dockerfile and containerization
- Deploy to Railway/Fly.io
- Production database setup and migrations
- Monitoring and alerting
v1.0 Multi-Channel Features (Production Ready):
- ✅ HTML email output with responsive templates
- ✅ Slack/Discord integration with webhook support
- ✅ Text-to-Speech (TTS) MP3 generation with multiple providers
- ✅ AI banner image generation with DALL-E integration