⭐ If you find this project useful, please star the repo! It helps others discover the tool and motivates continued development.
Transform compressed data dumps into browsable HTML archives with flexible deployment options. Redd-Archiver supports offline browsing via sorted index pages OR full-text search with Docker deployment. Features mobile-first design, multi-platform support, and enterprise-grade performance with PostgreSQL full-text indexing.
Supported Platforms:
| Platform | Format | Status | Available Posts |
|---|---|---|---|
| Reddit | .zst JSON Lines (Pushshift) | ✅ Full support | 2.38B posts (40,029 subreddits, through Dec 31 2024) |
| Voat | SQL dumps | ✅ Full support | 3.81M posts, 24.1M comments (22,637 subverses, complete archive) |
| Ruqqus | .7z JSON Lines | ✅ Full support | 500K posts (6,217 guilds, complete archive) |
Tracked content: 2.384 billion posts across 68,883 communities (Reddit full Pushshift dataset through Dec 31 2024, Voat/Ruqqus complete archives)
Version 1.0 features multi-platform archiving, REST API with 30+ endpoints, MCP server for AI integration, and PostgreSQL-backed architecture for large-scale processing.
Archive internet history before it disappears - Deploy in 2 minutes, no domain required.
Try the live demo: Browse Example Archive →
QUICKSTART.md - Step-by-step deployment:
- 2 min: Tor hidden service (no domain, no port forwarding, works behind CGNAT)
- 5 min: Local testing (HTTP on localhost)
- 15 min: Production HTTPS (automated Let's Encrypt)
Why now? Communities get banned, platforms shut down, discussions vanish. Start preserving today.
- First time here? QUICKSTART.md - Deploy in 2-15 minutes
- Quick answers? FAQ - Common questions answered in 30 seconds
- Need help? Troubleshooting - Fix common issues
- Using the API? API Reference - 30+ REST endpoints
- How it works? Architecture - Technical deep-dive
Deployment guides:
- Tor Hidden Service - .onion setup (2 min, no domain needed)
- HTTPS Production - Let's Encrypt SSL (15 min)
- Static Hosting - GitHub/Codeberg Pages (browse-only)
- Docker Reference - Complete Docker guide
Advanced:
- MCP Server - AI integration (Claude Desktop/Code)
- Scanner Tools - Data discovery utilities
- Registry Setup - Instance leaderboard
Archive content from multiple link aggregator platforms in a single unified archive:
| Platform | Format | CLI Flag | URL Prefix |
|---|---|---|---|
| Reddit | .zst JSON Lines | `--subreddit` | /r/ |
| Voat | SQL dumps | `--subverse` | /v/ |
| Ruqqus | .7z JSON Lines | `--guild` | /g/ |
- Automatic Detection: Platform auto-detected from file extensions
- Unified Search: PostgreSQL FTS searches across all platforms
- Mixed Archives: Combine Reddit, Voat, and Ruqqus in single archive
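To make the auto-detection concrete, here is a minimal sketch of extension-based platform detection. The extension-to-platform mapping is an assumption inferred from the table above, not the actual logic in `reddarc.py`:

```python
from pathlib import Path

# Hypothetical extension -> platform mapping, inferred from the formats table.
PLATFORM_BY_EXTENSION = {
    ".zst": "reddit",   # Pushshift JSON Lines
    ".sql": "voat",     # Voat SQL dumps
    ".7z": "ruqqus",    # Ruqqus JSON Lines
}

def detect_platform(path: str) -> str:
    """Guess the source platform from a dump file's extension."""
    ext = Path(path).suffix.lower()
    try:
        return PLATFORM_BY_EXTENSION[ext]
    except KeyError:
        raise ValueError(f"Unrecognized dump format: {path}")

print(detect_platform("data/privacy_submissions.zst"))  # reddit
```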
29 MCP tools auto-generated from OpenAPI for AI assistants:
- Full Archive Access: Query posts, comments, users, search via Claude Desktop or Claude Code
- Token Overflow Prevention: Built-in LLM guidance with field selection and truncation
- 5 MCP Resources: Instant access to stats, top posts, subreddits, search help
- Claude Code Ready: Copy-paste configuration for immediate use
```json
{
  "mcpServers": {
    "reddarchiver": {
      "command": "uv",
      "args": ["--directory", "/path/to/mcp_server", "run", "python", "server.py"],
      "env": { "REDDARCHIVER_API_URL": "http://localhost:5000" }
    }
  }
}
```

See MCP Server Documentation for the complete setup guide.
- Mobile-First Design: Responsive layout optimized for all devices with touch-friendly navigation
- Advanced Search System (Server Required): PostgreSQL full-text search optimized for the Tor network. Search by keywords, subreddit, author, date, score. Requires Docker deployment - offline browsing uses sorted index pages.
- JavaScript Free: Complete functionality without JS, pure CSS interactions
- Theme Support: Built-in light/dark theme toggle with CSS-only implementation
- Accessibility: WCAG compliant with keyboard navigation and screen reader support
- Performance: Optimized CSS (29KB), designed for low-bandwidth networks
- Modular Architecture: 18 specialized modules for maintainability and extensibility
- PostgreSQL Backend: Large-scale processing with constant memory usage regardless of dataset size
- Lightning-Fast Search: PostgreSQL full-text search with GIN indexing
- REST API v1: 30+ endpoints with MCP/AI optimization for programmatic access to posts, comments, users, statistics, search, aggregations, and exports
- Tor-Optimized: Zero JavaScript, server-side search, no external dependencies
- Rich Statistics: Comprehensive analytics dashboard with file size tracking
- SEO Optimized: Complete meta tags, XML sitemaps, and structured data
- Streaming Processing: Memory-efficient with automatic resume capability
- Progress Tracking: Real-time transfer rates, ETAs, and database metrics
- Instance Registry: Leaderboard system with completeness-weighted scoring for distributed archives
- Local/Homelab: HTTP on localhost or LAN (2 commands)
- Production HTTPS: Automated Let's Encrypt setup (15 minutes)
- Tor Hidden Service: .onion access, zero networking config (2 minutes)
- Dual-Mode: HTTPS + Tor simultaneously
- Static Hosting: GitHub/Codeberg Pages for small archives (browse-only, no search)
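The streaming processing works because the dumps are newline-delimited JSON: each line is one complete record, so files can be consumed one object at a time with flat memory use. A minimal sketch of the pattern, using gzip from the standard library for a self-contained demo (the real importer streams zstd-compressed .zst files):

```python
import gzip
import json

def stream_records(path):
    """Yield one JSON object per line; memory stays flat however large the file is."""
    with gzip.open(path, "rt", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)

# Demo: write a tiny compressed dump, then stream it back.
with gzip.open("demo.jsonl.gz", "wt", encoding="utf-8") as fh:
    fh.write('{"id": "abc", "score": 42}\n{"id": "def", "score": 7}\n')

total = sum(rec["score"] for rec in stream_records("demo.jsonl.gz"))
print(total)  # 49
```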
Main landing page showing archive overview with statistics for 9,592 posts across Reddit, Voat, and Ruqqus. Features customizable branding (site name, project URL), responsive cards, activity metrics, and content statistics. (Works offline)
Post listing with sorting options (score, comments, date), pagination, and badge coloring. Includes navigation and theme toggle. (Works offline - sorted by score/comments/date)
Individual post displaying nested comment threads with collapsible UI, user flair, and timestamps. Comments include anchor links for direct navigation from user pages. (Works offline)
Fully optimized for mobile devices with touch-friendly navigation and responsive layout.
PostgreSQL full-text search with Google-style operators. Supports filtering by subreddit, author, date range, and score. (Requires Docker deployment)
Search results with highlighted excerpts using PostgreSQL ts_headline(). Sub-second response times with GIN indexing. (Server-based, Tor-compatible)
Sample Archive: Multi-platform archive featuring programming and technology communities from Reddit, Voat, and Ruqqus · See all screenshots →
Prerequisites: Python 3.7+, PostgreSQL 12+, 4GB+ RAM
Quick Install (Docker):

```bash
git clone https://github.com/19-84/redd-archiver.git
cd redd-archiver

# Create required directories
mkdir -p data output/.postgres-data logs tor-public

# Configure environment (IMPORTANT: change passwords!)
cp .env.example .env
nano .env  # Edit POSTGRES_PASSWORD and DATABASE_URL

# Start services
docker compose up -d

# Generate archive (after downloading .zst files to data/)
python reddarc.py data/ \
  --subreddit privacy \
  --comments-file data/privacy_comments.zst \
  --submissions-file data/privacy_submissions.zst \
  --output output/
```

Detailed installation procedures (Docker, Ubuntu/Debian, macOS, Windows WSL2):
- Installation Guide - Platform-specific setup and troubleshooting
Quick workflow: Download data → Run archive generator → Deploy

```bash
# Generate archive (assumes .zst files in data/ directory)
python reddarc.py data/ \
  --subreddit privacy \
  --comments-file data/privacy_comments.zst \
  --submissions-file data/privacy_submissions.zst \
  --output output/

# Deploy with Docker
docker compose up -d

# Access at http://localhost
```

- Reddit: .zst files from Pushshift (3.28TB, 2.38B posts)
- Voat: SQL dumps from Archive.org (15GB, 3.8M posts) - use pre-split files for 1000x speedup
- Ruqqus: .7z files from Archive.org (752MB, 500K posts)
- QUICKSTART.md - Step-by-step deployment (2-15 min)
- Scanner Tools - Identify high-priority communities
- Installation Guide - Detailed setup procedures
- Deployment Guides - Docker, Tor, Static hosting
CLI options and advanced workflows: See QUICKSTART.md for complete reference.
Environment Variables:

```bash
# Required
DATABASE_URL=postgresql://user:pass@host:5432/reddarchiver

# Optional Performance Tuning (auto-detected if not set)
REDDARCHIVER_MAX_DB_CONNECTIONS=8    # Connection pool size
REDDARCHIVER_MAX_PARALLEL_WORKERS=4  # Parallel processing workers
REDDARCHIVER_USER_BATCH_SIZE=2000    # User page batch size
REDDARCHIVER_QUEUE_MAX_BATCHES=10    # Queue backpressure control
REDDARCHIVER_CHECKPOINT_INTERVAL=10  # Progress save frequency
REDDARCHIVER_USER_PAGE_WORKERS=4     # User page generation workers
```

Modular PostgreSQL-backed design with 18 specialized HTML modules and multi-platform import support:
Core Components:
- `reddarc.py` - Main CLI entry point with platform auto-detection
- `core/` - PostgreSQL backend, streaming importers (Reddit/Voat/Ruqqus), HTML generation
- `api/` - REST API v1 with 30+ endpoints
- `mcp_server/` - MCP server for AI integration (29 tools)
- `html_modules/` - 18 specialized modules (Jinja2 rendering, SEO, statistics, CSS minification)
- `templates_jinja2/` - 15 Jinja2 templates with inheritance system
- `processing/` - Parallel user processing, batch optimization, statistics
- `monitoring/` - Performance tracking, auto-tuning, system optimization
Key Features:
- Streaming architecture with constant memory (4GB regardless of dataset size)
- PostgreSQL COPY protocol for 15K+ inserts/sec
- Keyset pagination for O(1) queries
- Resume capability with database checkpoints
- Multi-platform unified search with FTS GIN indexing
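The keyset pagination bullet deserves a concrete illustration: instead of OFFSET (which scans and discards rows, getting slower the deeper you page), each request filters on the last-seen sort key. A self-contained sketch using sqlite3 from the standard library (the project itself runs this pattern on PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, score INTEGER)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(i, i * 3 % 100) for i in range(1, 501)])

def fetch_page(after=None, size=10):
    """Fetch the next page ordered by (score DESC, id DESC).

    `after` is the (score, id) of the last row on the previous page; the
    row-value comparison replaces OFFSET, so every page costs the same.
    """
    if after is None:
        cur = conn.execute(
            "SELECT score, id FROM posts "
            "ORDER BY score DESC, id DESC LIMIT ?", (size,))
    else:
        cur = conn.execute(
            "SELECT score, id FROM posts WHERE (score, id) < (?, ?) "
            "ORDER BY score DESC, id DESC LIMIT ?", (*after, size))
    return cur.fetchall()

page1 = fetch_page()
page2 = fetch_page(after=page1[-1])
print(len(page1), len(page2))  # 10 10
```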
Learn more: ARCHITECTURE.md - Complete technical deep-dive
30+ REST API Endpoints for programmatic access with MCP/AI optimization:
- System (5): Health checks, stats, schema, OpenAPI spec
- Posts (13): List, single, comments, context, tree, related, random, aggregate, batch
- Comments (7): List, single, random, aggregate, batch
- Users (8): Profiles, summary, activity, aggregate, batch
- Subreddits (4): List, statistics, summary
- Search (3): Full-text search with Google-style operators, query debugging
AI-Optimized Features: Field selection, truncation controls, export formats (CSV/NDJSON), batch endpoints, context endpoints. Rate limited to 100 req/min.
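As a hedged sketch of what a field-selected request might look like, here is a client building a query URL. The endpoint path and the `fields`/`truncate` parameter names are assumptions for illustration only; consult the API reference for the real names:

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameter names, for illustration only.
base = "http://localhost:5000/api/v1/posts"
params = {
    "subreddit": "privacy",
    "sort": "score",
    "limit": 25,
    "fields": "id,title,score,num_comments",  # field selection trims payloads
    "truncate": 500,                          # cap body length for LLM contexts
}
url = f"{base}?{urlencode(params)}"
print(url)
```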
PostgreSQL Full-Text Search: Lightning-fast GIN-indexed search with relevance ranking, highlighted excerpts, and advanced filters (subreddit, author, date, score). Sub-second results for large datasets.
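The highlighted excerpts come from PostgreSQL's ts_headline(); the same idea can be demonstrated in a self-contained way with SQLite's FTS5 snippet() function (an analogy only; the project uses PostgreSQL with GIN indexes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
conn.execute("INSERT INTO docs VALUES (?, ?)",
             ("Archiving tips",
              "Streaming imports keep memory flat while indexing posts."))

# snippet() plays the role of ts_headline(): return the matching excerpt
# with the query terms wrapped in highlight markers.
row = conn.execute(
    "SELECT snippet(docs, 1, '<b>', '</b>', '…', 8) "
    "FROM docs WHERE docs MATCH ?", ("memory",)).fetchone()
print(row[0])
```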
Instance Registry: Distributed leaderboard system for tracking archive instances. Configure metadata, automate scoring, group teams for coordinated archiving.
Learn more: API Documentation · Search Setup · Registry Guide
- Studying online discourse and community dynamics
- Analyzing social movements and trends
- Preserving internet culture
- Backing up subreddits before potential removal
- Creating offline-accessible community resources
- Distributing knowledge repositories
- Pattern analysis in deleted/removed content
- User behavior studies
- Content moderation research
Internet content disappears every day. Communities get banned, platforms shut down, and valuable discussions vanish. You can help prevent this.
Don't wait for content to disappear. Download these datasets today:
| Platform | Size | Posts | Download |
|---|---|---|---|
| Reddit | 3.28TB | 2.38B posts | Academic Torrents · Magnet Link |
| Voat | ~15GB | 3.8M posts | Archive.org ¹ |
| Ruqqus | ~752MB | 500K posts | Archive.org ² |
¹ Voat Performance Tip: Use pre-split files for 1000x faster imports (2-5 min vs 30+ min per subverse)
² Ruqqus: Docker image includes p7zip for automatic .7z decompression
Every mirror matters. Store locally, seed torrents, share with researchers. Be part of the preservation network.
Already running an archive? Register it on our public leaderboard:
- Deploy your instance (Quick Start - 2-15 minutes)
- Submit via Registry Template
- Join coordinated preservation efforts with other teams
Benefits:
- Public visibility and traffic
- Coordinated archiving to avoid duplication
- Team collaboration opportunities
- Leaderboard recognition
Register Your Instance Now →
Found a new platform dataset? Help expand the archive network:
- Lemmy databases
- Hacker News archives
- Alternative Reddit archives
- Other link aggregator platforms
Why submit?
- Makes data discoverable for other archivists
- Prevents duplicate preservation efforts
- Builds comprehensive multi-platform archive ecosystem
- Tracks data availability before platforms disappear
- Docker Deployment Guide - Complete Docker setup including PostgreSQL, nginx, HTTPS, and Tor
- Tor Deployment Guide - Tor hidden service setup for homelab and privacy deployments
- Static Deployment Guide - GitHub Pages and Codeberg Pages deployment (browse-only, no search)
- Installation Guide - Detailed installation procedures (Docker, Ubuntu/Debian, macOS, Windows WSL2)
- Search Setup - PostgreSQL full-text search configuration and usage
- Performance Guide - Memory usage, storage calculations, and tuning
- Scaling Guide - Horizontal scaling for large archives (multi-instance deployments)
- REST API Documentation - Complete API reference with 30+ endpoints
- MCP Server Documentation - AI integration with Claude Desktop/Claude Code
- Registry Setup Guide - Instance registry configuration
- CONTRIBUTING.md - Development guidelines and contribution procedures
- SECURITY.md - Security policy and vulnerability reporting
- LICENSE - Unlicense (public domain)
We welcome contributions! Please see CONTRIBUTING.md for development guidelines, code structure, and testing procedures.
Key areas for contribution:
- PostgreSQL query optimizations
- Additional export formats
- Enhanced search features
- Documentation improvements
See our modular architecture (18 specialized modules) for easy entry points to contribute.
This is free and unencumbered software released into the public domain. See the LICENSE file (Unlicense) for details.
Anyone is free to copy, modify, publish, use, compile, sell, or distribute this software for any purpose, commercial or non-commercial, and by any means.
This project leverages public datasets from the following sources:
- Pushshift - Reddit data access and archival infrastructure
- Watchful1's PushshiftDumps - Comprehensive data dump tools and torrent management
- Arctic Shift - Making Reddit data accessible to researchers and the public
- Ruqqus Public Dataset - 752 MB Ruqqus archive (comments and submissions)
- SearchVoat Archive - 16.8 GB Voat.co complete backup
This project builds upon the work of several excellent archival projects:
- reddit-html-archiver by libertysoft3 - Original inspiration and foundation for static HTML generation
- redarc - Self-hosted Reddit archiving with PostgreSQL and full-text search
- red-arch - Static website generator for Reddit subreddit archives
- zst_blocks_format - Efficient block-based compression format for processing large datasets
- GitHub Issues: Report bugs or request features
- GitHub Discussions: Ask questions or share ideas
- Security Issues: Report via GitHub Security Advisories
Redd-Archiver was built by one person over 6 months as a labor of love to preserve internet history before it disappears forever.
This isn't backed by a company or institution; just an individual committed to keeping valuable discussions accessible. Your support helps:
- Continue development and bug fixes
- Maintain documentation and support
- Cover infrastructure costs (servers, storage, bandwidth)
- Preserve more data sources and platforms
Every donation, no matter the size, helps keep this preservation effort alive.
Bitcoin: bc1q8wpdldnfqt3n9jh2n9qqmhg9awx20hxtz6qdl7
Monero: 42zJZJCqxyW8xhhWngXHjhYftaTXhPdXd9iJ2cMp9kiGGhKPmtHV746EknriN4TNqYR2e8hoaDwrMLfv7h1wXzizMzhkeQi
Thank you for supporting internet archival efforts! Every contribution helps maintain and improve this project.
This software is provided "as is" under the Unlicense. See LICENSE for details. Users are responsible for compliance with applicable laws and terms of service when processing data.