helencute/ai-content-system
AI Content System

A comprehensive, modular AI system that transforms news topics into well-researched, multi-format content with automated distribution.

Business Requirements

The AI Content System addresses the need for high-quality, research-backed content creation at scale. This system:

  • Automates Research: Transforms news topics or issues into structured research plans
  • Ensures Credibility: Finds and evaluates high-credibility online sources
  • Creates Data-Rich Content: Generates comprehensive reports with data visualization
  • Maintains Brand Consistency: Uses style transfer models to maintain consistent voice
  • Multi-Platform Publishing: Automatically publishes to WordPress (blogs) and Threads (social)
  • Modular Architecture: Supports extensibility through component-based design
  • Language Model Flexibility: Works with various LLMs (GPT-4o by default, with options for open-source models)

The system leverages the Model Context Protocol (MCP) and agentic AI to coordinate complex workflows from research to publication, ensuring consistent quality across content formats and platforms.
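At a high level, the research-to-publication flow described above can be sketched as a pipeline. This is an illustrative outline only; the class and method names here (`ResearchPlanner`, `SourceCollector`, `ContentEngine`, `run_pipeline`) are hypothetical and not the system's actual API:

```python
# Illustrative sketch of the research-to-publication pipeline.
# All names below are hypothetical, not the repository's real API.

class ResearchPlanner:
    def plan(self, topic: str) -> list[str]:
        # Break a news topic into concrete research questions.
        return [f"Background on {topic}", f"Recent data on {topic}"]

class SourceCollector:
    def collect(self, questions: list[str]) -> list[dict]:
        # Query search clients and keep only high-credibility sources.
        return [{"question": q, "url": "https://example.org", "credibility": 0.9}
                for q in questions]

class ContentEngine:
    def generate(self, sources: list[dict]) -> dict:
        # Produce blog, thread, and report variants from the same research.
        return {fmt: f"{len(sources)} sources" for fmt in ("blog", "thread", "report")}

def run_pipeline(topic: str) -> dict:
    questions = ResearchPlanner().plan(topic)
    sources = SourceCollector().collect(questions)
    return ContentEngine().generate(sources)

print(run_pipeline("global food security"))
```

Each stage depends only on the previous stage's output, which is what makes the component-based design extensible: any stage can be swapped without touching the others.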

Project Structure

```
ai-content-system/
├── core/
│   ├── agents/
│   │   ├── research_planner.py      # Research plan generation
│   │   ├── source_collector.py      # Source collection orchestration
│   │   └── data_analyzer.py         # Data analysis and visualization
│   ├── models/
│   │   ├── source_models.py         # Source and credibility data models
│   │   └── research_models.py       # Research plan data models
│   ├── services/
│   │   ├── credibility_service.py   # Source credibility evaluation
│   │   └── content_extraction.py    # Content extraction from sources
│   └── clients/
│       └── search/
│           ├── __init__.py          # Common interface exports
│           ├── base.py              # SearchAPIClient base class
│           ├── google.py            # Google Search API client
│           ├── perplexity.py        # Perplexity AI API client
│           ├── news.py              # News API client
│           ├── academic.py          # Semantic Scholar client
│           └── factory.py           # Search client factory
├── content_engine/
│   ├── generators/
│   │   ├── blog_generator.py        # Blog content generation
│   │   ├── thread_generator.py      # Thread content generation
│   │   └── report_generator.py      # Research report generation
│   └── style_transfer/              # Style transfer module
├── infrastructure/
│   ├── config_loader.py             # Configuration management
│   └── dependency_injection.py      # Dependency injection container
├── integrations/                    # External platform connections
├── scripts/                         # Utility and test scripts
└── tests/                           # Test suite
    ├── unit/                        # Unit tests
    └── integration/                 # Integration tests
```
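The `clients/search/` layout (a `base.py` defining `SearchAPIClient` plus a `factory.py`) suggests the classic abstract-base-class-plus-factory pattern. As an illustration only, with stub client classes standing in for the real implementations (the actual contents of `base.py` and `factory.py` may differ):

```python
from abc import ABC, abstractmethod

class SearchAPIClient(ABC):
    """Common interface every search backend implements."""

    @abstractmethod
    def search(self, query: str, limit: int = 10) -> list[dict]:
        ...

class NewsClient(SearchAPIClient):
    def search(self, query: str, limit: int = 10) -> list[dict]:
        # A real implementation would call NewsAPI.org here.
        return [{"source": "news", "query": query}]

class AcademicClient(SearchAPIClient):
    def search(self, query: str, limit: int = 10) -> list[dict]:
        # A real implementation would call Semantic Scholar here.
        return [{"source": "academic", "query": query}]

_CLIENTS = {"news": NewsClient, "academic": AcademicClient}

def create_search_client(kind: str) -> SearchAPIClient:
    """Factory: map a configuration string to a concrete client."""
    try:
        return _CLIENTS[kind]()
    except KeyError:
        raise ValueError(f"unknown search client: {kind!r}")

client = create_search_client("news")
print(client.search("climate change"))
```

Callers depend only on the `SearchAPIClient` interface, so adding a new backend means registering one more class in the factory rather than editing call sites.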

Installation

  1. Clone the repository
    git clone https://github.com/yourusername/ai-content-system.git
    cd ai-content-system
  2. Create a virtual environment (optional)
    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies
    pip install -r requirements.txt
  4. Configure environment variables
    cp .env.example .env
  5. Run the application
    python main.py

API Keys Setup

The system requires several API keys for full functionality:

  1. OpenAI API Key: Get from OpenAI Platform
  2. Google Search API Key: Enable the Custom Search JSON API in the Google Cloud Console and create a Programmable Search Engine
  3. Perplexity API Key: Sign up at Perplexity API
  4. News API Key: Register at NewsAPI.org
  5. Semantic Scholar API Key: Request from Semantic Scholar API

Copy .env.example to .env and add your API keys.
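For illustration, here is a minimal stdlib-only way to load such a file and fail fast when a key is missing. This is an assumption about usage, not the project's actual loader; the repository's `config_loader.py` may use a library such as python-dotenv instead:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Read simple KEY=VALUE lines into os.environ.

    Blank lines and #-comments are skipped; existing environment
    variables are not overwritten.
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

def require_keys(*names: str) -> None:
    """Raise early if any required API key is absent or empty."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(f"missing API keys: {', '.join(missing)}")
```

Calling `require_keys("OPENAI_API_KEY", ...)` at startup turns a confusing mid-run API failure into an immediate, readable error.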

You don't need to write code that installs packages programmatically within your application; dependency installation is handled externally through pip or your containerization process.

Dependency Management

This project follows these forward-looking dependency principles:

  1. Latest Versions First: We use the latest stable versions of core dependencies
  2. No Downgrades: We find alternatives rather than downgrade dependencies
  3. Isolation: All dependencies are managed in a dedicated virtual environment

Handling Anaconda Conflicts

If you're using Anaconda, you may see dependency warnings from packages like:

  • anaconda-cloud-auth (requiring older pydantic)
  • spyder, numba, scipy (requiring older numpy)

These warnings can be safely ignored when using a dedicated virtual environment, as they refer to global Anaconda packages not used by this project.
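To confirm that such warnings come from the base Anaconda install rather than your project environment, you can check which interpreter is active. This small helper is not part of the repository, just a diagnostic sketch:

```python
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the environment directory
    # while sys.base_prefix still points at the base interpreter.
    return sys.prefix != sys.base_prefix

print("virtual environment active:", in_virtualenv())
print("interpreter:", sys.executable)
```

If this prints `True` and an interpreter path inside your `venv/` directory, the project's dependencies are isolated from the global Anaconda packages that triggered the warnings.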

Test the research planner with a specific topic:

    python scripts/test_research_planner.py --topic "Impact of climate change on global food security"

Run unit tests:

    python -m pytest tests/unit/test_research_planner.py -v
