Ragbot continues active development alongside RaGenie, its next-generation sibling. Both are open source and share the same data layer (ragbot-data).
| Use Case | Recommendation |
|---|---|
| Quick setup, CLI-focused workflow | Ragbot |
| Need RAG with vector search | Both (Ragbot now has Qdrant RAG) |
| Prefer Streamlit simplicity | Ragbot |
| Need microservices architecture | RaGenie |
| Want both CLI and modern web UI | Use both! |
RaGenie (ragenie.ai) is a modern microservices platform that complements Ragbot:
| Feature | Ragbot (v1) | RaGenie (v2) |
|---|---|---|
| Architecture | Monolithic Streamlit | Microservices (FastAPI + React) |
| Authentication | None | JWT OAuth2 with role-based access |
| Storage | File system + Qdrant | PostgreSQL + MinIO + Qdrant (vectors) |
| RAG | Qdrant vector search | Automatic embeddings with semantic search |
| Scalability | Single container | Horizontal scaling with load balancing |
| Monitoring | None | Prometheus + Grafana dashboards |
| Caching | None | Redis with smart invalidation |
| API | None | RESTful APIs with documentation |
- Automatic Synchronization: Edit markdown files and see changes indexed within 45 seconds
- Advanced RAG: Vector embeddings for semantic search across all your knowledge
- Production Ready: Built-in monitoring, health checks, and backup strategies
- Secure by Default: Authentication, encryption, and access control
- Developer Friendly: Interactive API docs, database migrations, comprehensive testing
Your existing workflow doesn't change:
- ✅ ragbot-data repository remains your source of truth
- ✅ Edit markdown files as you always have
- ✅ Same directory structure (datasets/, instructions/, runbooks/, workspaces/)
- ✅ Git workflow unchanged
- ✅ RaGenie mounts ragbot-data read-only (never modifies your files)
Migration Resources:
- RaGenie Repository: github.com/rajivpant/ragenie
- Integration Guide: RAGENIE_INTEGRATION.md
- Quick Start: RaGenie QUICKSTART.md
Both products are actively developed:
Ragbot:
- ✅ Bug fixes and security updates
- ✅ Compatibility updates for new LLM models
- ✅ New features including RAG capabilities (using Qdrant, same as RaGenie)
- ✅ Continued CLI and Streamlit UI improvements
RaGenie:
- ✅ Modern microservices architecture
- ✅ Advanced RAG with automatic indexing
- ✅ Production-ready deployment features
Choose the product that fits your workflow - or use both!
🤖 Ragbot.AI (formerly named rbot): Rajiv's open source AI augmented brain assistant combines the power of large language models (LLMs) with Retrieval Augmented Generation (RAG).
🚀 Ragbot.AI processes user prompts along with instructions, datasets, and runbooks, enabling context-aware responses. Powered by the latest LLMs including OpenAI's GPT-4o and o-series models, Anthropic's Claude Sonnet 4.5 and Claude Opus 4.5, and Google's Gemini 2.5 series, Ragbot.AI uses RAG, a technique that combines the power of pre-trained dense retrieval and sequence-to-sequence models to generate more factual and informative text.
🧠 Instructions and datasets help Ragbot.AI better understand context, resulting in personalized, more accurate, and relevant responses that surpass the capabilities of out-of-the-box LLMs.
Developed by Rajiv Pant
Ragbot is developed using Synthesis Engineering (also known as Synthesis Coding)—a systematic approach that combines human architectural expertise with AI-assisted implementation. This methodology ensures that while AI accelerates development velocity, engineers maintain architectural authority, enforce quality standards, and deeply understand every component of the system.
Key principles applied in Ragbot's development:
- Human-defined architecture with AI-accelerated implementation
- Systematic quality assurance regardless of code origin
- Context preservation across development sessions
- Iterative refinement based on real-world usage
Learn more about this approach:
- Synthesis Engineering: The Professional Practice
- The Organizational Framework
- Technical Implementation with Claude Code
Code Contributors & Collaborators
How to Contribute
Your code contributions are welcome! Please read CONTRIBUTING.md for important safety guidelines (especially about not committing personal data), then fork the repository and submit a pull request with your improvements.
Introducing Ragbot: A Personalized AI Assistant
Excerpt from the blog post:
Ragbot.AI: Offering Personalized Assistance Beyond What ChatGPT Plus, Bing Chat, and Google Bard Currently Offer
As an AI assistant, I provide a unique level of personalization and adaptability that sets me apart from current implementations of ChatGPT Plus, Bing Chat, and Google Bard. By using folders containing customized dataset files, I can cater to multiple use cases, such as personal life, work, education, and specific projects. This customization enables me to understand and support you in a way that is tailored to your unique needs.
You can create a folder with dataset files that include personal information, family details, travel and food preferences, and more. By using this information, I can function as your personal life assistant, offering AI-powered recommendations and support tailored to your specific context.
Similarly, you can develop another folder containing dataset files related to your work life. These files might include details about your job, industry, colleagues, projects, and other work-related information. With this context, I can help you with various tasks, such as drafting emails, scheduling meetings, conducting research, and more, enhancing your efficiency and organization.
You can also customize me for educational purposes by creating a folder with dataset files containing information about your academic background, subjects of interest, courses, and other educational details. In this role, I can provide personalized educational support, from helping with homework to explaining complex concepts or recommending learning resources.
In addition to the use cases mentioned above, I can be tailored to support you on specific projects. By creating a workspace folder with dataset files containing project-related information, such as objectives, team members, deadlines, and relevant resources, I can assist you throughout the project lifecycle, offering valuable insights and support tailored to each unique project.
My ability to create distinct profiles for different needs using customized dataset files and workspaces sets me apart from ChatGPT Plus, Bing Chat, and Google Bard. This versatility enables me to offer personalized assistance across multiple aspects of your life, ensuring that I can understand and cater to your specific requirements.
Get Ragbot running in 5 minutes:
```shell
# 1. Clone this repository
git clone https://github.com/rajivpant/ragbot.git
cd ragbot

# 2. Set up your API keys
cp .env.docker .env
# Edit .env and add at least one API key (OpenAI, Anthropic, or Gemini)

# 3. Get starter templates from ai-knowledge-ragbot
git clone https://github.com/rajivpant/ai-knowledge-ragbot.git ~/ai-knowledge-ragbot
cp -r ~/ai-knowledge-ragbot/source/datasets/templates/ datasets/my-data/
cp ~/ai-knowledge-ragbot/source/instructions/templates/default.md instructions/

# 4. Customize with your information
# Edit the files in datasets/my-data/ with your personal details

# 5. Start Ragbot with Docker
docker-compose up -d

# 6. Access the web interface
open http://localhost:8501
```

If you want to keep your data in a separate directory or private repository:
```shell
# 1. Clone Ragbot
git clone https://github.com/rajivpant/ragbot.git
cd ragbot

# 2. Create your data directory
mkdir ~/ragbot-data
# Or clone your private data repo: git clone <your-private-repo> ~/ragbot-data

# 3. Set up Docker override
cp docker-compose.override.example.yml docker-compose.override.yml
# Edit docker-compose.override.yml to point to your data directory

# 4. Organize your data (get templates from ai-knowledge-ragbot)
git clone https://github.com/rajivpant/ai-knowledge-ragbot.git ~/ai-knowledge-ragbot
cp -r ~/ai-knowledge-ragbot/source/datasets/templates/* ~/ragbot-data/datasets/
cp ~/ai-knowledge-ragbot/source/instructions/templates/default.md ~/ragbot-data/instructions/

# 5. Configure API keys
cp .env.docker .env
# Edit .env with your API keys

# 6. Start Ragbot
docker-compose up -d
```

- 📖 Knowledge Base: Get templates and runbooks from ai-knowledge-ragbot
- 🎓 Understand the philosophy: Read docs/DATA_ORGANIZATION.md
- 🐳 Docker deployment: See README-DOCKER.md for deployment guide
- 🤝 Contributing safely: Read CONTRIBUTING.md before contributing
- ⚙️ Detailed setup: Follow the installation guide and configuration guide
Ragbot implements a production-grade, multi-stage RAG pipeline based on research from leading AI systems including Perplexity, ChatGPT, Claude, and Gemini. Unlike simple RAG implementations, Ragbot uses sophisticated techniques proven to significantly improve retrieval accuracy.
```text
Query → Phase 1    → Phase 2   → Phase 3    → Generate  → Phase 4     → Response
        Foundation   Query       Hybrid       Response    Verify with
                     Intel       Retrieval    & CRAG      Confidence
```
Four-Phase Pipeline:
| Phase | Description | Key Techniques |
|---|---|---|
| Phase 1 | Foundation | Query preprocessing, full document retrieval, 16K context budget |
| Phase 2 | Query Intelligence | LLM planner, multi-query expansion (5-7 variations), HyDE |
| Phase 3 | Hybrid Retrieval | BM25 + Vector search, Reciprocal Rank Fusion, LLM reranking |
| Phase 4 | Verification | Hallucination detection, confidence scoring, CRAG loop |
Based on benchmarks from Anthropic, Microsoft, and other research:
| Technique | Impact |
|---|---|
| Contextual embeddings | 35% fewer retrieval failures |
| Hybrid search + reranking | 67% fewer retrieval failures |
| Query rewriting (multi-query) | +21 NDCG points |
- Query Preprocessing: Expands contractions ("what's" → "what is"), extracts key terms
- Document Detection: Recognizes "show me my biography" style queries
- Full Document Retrieval: Returns complete documents instead of fragments when appropriate
- Enhanced Embeddings: Includes filename and title in embeddings for better matching
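The contraction-expansion step of query preprocessing can be sketched roughly as follows. The mapping table and function name here are illustrative, not Ragbot's actual code:

```python
# Illustrative contraction table; Ragbot's real preprocessing list is
# presumably larger and also extracts key terms.
CONTRACTIONS = {
    "what's": "what is",
    "where's": "where is",
    "don't": "do not",
    "can't": "cannot",
}

def preprocess_query(query: str) -> str:
    """Expand contractions so lexical (BM25) matching sees full words."""
    return " ".join(CONTRACTIONS.get(w, w) for w in query.lower().split())
```

Expanding "what's" to "what is" matters mainly for the lexical half of hybrid search, where exact token overlap drives the score.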
- LLM Query Planner: Analyzes intent, determines retrieval strategy
- Multi-Query Expansion: Generates 5-7 query variations for better recall
- HyDE (Hypothetical Document Embeddings): Generates hypothetical answers for semantic search
- Provider-Agnostic: Uses fast model from same provider as user's selection
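Multi-query expansion and HyDE can be sketched as below. The prompts and function names are illustrative; the `llm` callable stands in for whichever fast model the pipeline selects:

```python
from typing import Callable, List

def expand_query(query: str, llm: Callable[[str], str], n: int = 5) -> List[str]:
    """Multi-query expansion: ask a fast LLM for n paraphrases of the query."""
    prompt = (f"Rewrite the question below in {n} different ways, "
              f"one per line.\nQuestion: {query}")
    variations = [ln.strip() for ln in llm(prompt).splitlines() if ln.strip()]
    return [query] + variations[:n]  # always keep the original query first

def hyde_document(query: str, llm: Callable[[str], str]) -> str:
    """HyDE: generate a hypothetical answer and embed it in place of the query."""
    return llm(f"Write a short passage that would answer: {query}")
```

Each variation (and the HyDE passage) is then embedded and searched separately, and the result sets are merged downstream.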
- Dual Search: Combines semantic (vector) and lexical (BM25) search
- Reciprocal Rank Fusion: Merges results from both search methods
- LLM Reranking: Scores relevance 0-10, reorders by combined score
- Result: Best of both semantic understanding and exact keyword matching
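Reciprocal Rank Fusion itself is compact enough to sketch in full; this is the standard formulation rather than Ragbot's exact implementation:

```python
from collections import defaultdict
from typing import Dict, List

def reciprocal_rank_fusion(rankings: List[List[str]], k: int = 60) -> List[str]:
    """Merge ranked lists (e.g. BM25 and vector results) by RRF score.

    Each document scores sum(1 / (k + rank)) over the lists that contain it;
    k=60 is the constant from the original RRF paper.
    """
    scores: Dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Documents that appear high in both lists accumulate the largest scores, which is why RRF favors results where semantic and keyword search agree.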
- Claim Extraction: Identifies factual claims in generated responses
- Evidence Matching: Checks each claim against retrieved context
- Confidence Scoring: 0.0-1.0 score based on claim verification
- CRAG (Corrective RAG): Re-retrieves for low-confidence responses (<0.7)
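The verification loop can be sketched as follows. Every callable here (claim extraction, evidence matching, re-retrieval) is a stand-in for the corresponding pipeline stage, not Ragbot's actual API:

```python
from typing import Callable, List

def confidence(claims: List[str], supported: Callable[[str], bool]) -> float:
    """Score = fraction of extracted claims with supporting evidence (0.0-1.0)."""
    if not claims:
        return 1.0
    return sum(1 for claim in claims if supported(claim)) / len(claims)

def crag_answer(query: str, generate: Callable[..., str],
                extract_claims: Callable[[str], List[str]],
                supported: Callable[[str], bool],
                re_retrieve: Callable[[str], list],
                threshold: float = 0.7, max_rounds: int = 2) -> str:
    """Corrective RAG: re-retrieve and regenerate while confidence < threshold."""
    answer = generate(query)
    for _ in range(max_rounds):
        if confidence(extract_claims(answer), supported) >= threshold:
            break
        answer = generate(query, extra_context=re_retrieve(query))
    return answer
```

Bounding the loop (`max_rounds`) keeps a persistently low-confidence query from retrieving forever.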
- Select a workspace in the sidebar
- Click "Index Workspace" in Advanced Settings to build the index (first time only)
- Enable "Enable RAG" checkbox
- Adjust "RAG context tokens" slider to control how much context is retrieved
| Setting | Default | Description |
|---|---|---|
| Enable RAG | On | Toggle RAG-augmented responses |
| RAG context tokens | 16000 | Maximum tokens for retrieved context |
| Confidence threshold | 0.7 | CRAG triggers below this score |
| Embedding model | all-MiniLM-L6-v2 | 384-dimension embeddings |
- Vector Database: Qdrant (local file-based storage at `/app/qdrant_data`)
- Embedding Model: sentence-transformers `all-MiniLM-L6-v2` (80 MB, 384 dimensions)
- Chunking: ~500 tokens per chunk with 50-token overlap
- Similarity: Cosine distance for semantic matching
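The chunking scheme above (~500 tokens, 50-token overlap) can be sketched as a sliding window; the function name and token representation are illustrative:

```python
from typing import List

def chunk_tokens(tokens: List[str], size: int = 500,
                 overlap: int = 50) -> List[List[str]]:
    """Split a token stream into size-token chunks, with `overlap` tokens
    shared between consecutive chunks so text cut at a boundary still
    appears whole in at least one chunk."""
    step = size - overlap  # advance 450 tokens per chunk at the defaults
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break
    return chunks
```

Each chunk is then embedded and stored in Qdrant, where cosine distance ranks chunks against the (embedded) query at retrieval time.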
For the complete technical architecture, see docs/rag-architecture.md.
Ragbot integrates with the AI Knowledge ecosystem for managing knowledge bases across multiple workspaces.
The ai-knowledge-ragbot repository contains open-source runbooks, templates, and guides that ship with Ragbot:
- Instruction templates - Starter configurations for AI assistants
- Dataset templates - Personal and professional profile templates
- Runbooks - Procedures for content creation, communication, system configuration
- Guides - Reference materials for working with AI
Personal ai-knowledge repos can inherit from ai-knowledge-ragbot to get these shared resources while adding private content.
The AI Knowledge system manages content across multiple workspaces using a three-part architecture:
| Operation | Where | When |
|---|---|---|
| Knowledge concatenation (`all-knowledge.md`) | CI/CD (GitHub Actions) | Every push to `source/` |
| Instruction compilation | Local (`ragbot compile`) | When instructions change (rare) |
| RAG indexing | Local (`ragbot index`) | When content changes + RAG needed |
Key concept: Edit source/ files directly. Knowledge concatenation is automatic. See docs/compilation-guide.md for details.
```text
ai-knowledge-{workspace}/
├── source/                # Your source files (authoritative)
│   ├── instructions/      # WHO - Identity, persona, rules
│   ├── runbooks/          # HOW - Procedures, workflows
│   └── datasets/          # WHAT - Reference knowledge
├── compiled/              # Auto-generated
│   └── {project}/
│       └── instructions/  # LLM-specific (claude.md, chatgpt.md, gemini.md)
└── all-knowledge.md       # Concatenated knowledge (CI/CD via GitHub Actions)
```
Quick examples:
```shell
# Compile instructions for a project
ragbot compile --project {name} --no-llm

# Index workspace for RAG
ragbot index --workspace {name}
```

For detailed setup instructions, see the LLM Project Setup Guide.
Ragbot automatically discovers AI Knowledge repositories by convention:
- Mount your `ai-knowledge` parent directory to `/app/ai-knowledge`
- Ragbot scans for directories matching `ai-knowledge-{workspace}`
- Each discovered repo provides instructions and knowledge for that workspace
```yaml
# docker-compose.override.yml
services:
  ragbot-web:
    volumes:
      - ${HOME}/projects/my-projects/ai-knowledge:/app/ai-knowledge:ro
      - ./workspaces:/app/workspaces:ro
```

Create workspace.yaml files to customize workspace behavior:
```yaml
# workspaces/my-project/workspace.yaml
name: My Project
description: Project-specific AI assistant
status: active
type: work
inherits_from:
  - personal  # Inherit from personal workspace
```

| Content Type | Loading Method | Use Case |
|---|---|---|
| Instructions | Always loaded | Core identity and behavior |
| Datasets | Direct or RAG | Small: direct, Large: RAG |
| Runbooks | RAG retrieval | Retrieved when relevant |
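One plausible way the `inherits_from` field resolves to a load order is a depth-first walk, base workspaces first; the function name and data shapes here are illustrative, not Ragbot's actual implementation:

```python
from typing import Dict, List

def resolve_inheritance(name: str, configs: Dict[str, dict]) -> List[str]:
    """Return the workspace chain (base-first) implied by inherits_from,
    so inherited content loads before the workspace's own content."""
    chain: List[str] = []

    def visit(ws: str) -> None:
        if ws in chain:
            return  # already included; also guards against cycles
        for parent in configs.get(ws, {}).get("inherits_from", []):
            visit(parent)
        chain.append(ws)

    visit(name)
    return chain
```

Loading base-first means a child workspace's instructions and datasets can refine or override what it inherits.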
Ragbot.AI supports the latest models from three leading AI providers (as of October 2025):
OpenAI Models:
- o3 Series: o3-mini, o3-pro, o3-deep-research - Most advanced reasoning models
- o1 Series: o1, o1-pro, o1-mini, o1-preview - Advanced reasoning capabilities
- GPT-4o Series: gpt-4o (default), gpt-4o-mini - Latest multimodal flagship models
- GPT-4o Audio: gpt-4o-audio-preview, gpt-4o-mini-audio-preview - Multimodal with audio support
- GPT-4 Turbo: Previous generation model
Anthropic Models:
- Claude 4.5 Sonnet (default): claude-sonnet-4-5 - Latest and most capable
- Claude 4.5 Opus: claude-opus-4-5 - Most powerful reasoning
- Claude 4 Series: claude-4-opus, claude-4-sonnet - Extended context versions
- Claude 3.7 Sonnet: Hybrid reasoning capabilities
- Claude 3.5 Series: claude-3-5-sonnet, claude-3-5-haiku - High performance
- Claude 3 Series: claude-3-opus, claude-3-haiku - Previous generation
Google Gemini Models:
- Gemini 2.5 Series: gemini-2.5-pro, gemini-2.5-flash (default), gemini-2.5-flash-lite
- Gemini 2.0 Series: gemini-2.0-flash, gemini-2.0-flash-lite, gemini-2.0-pro-exp
- Experimental: gemini-2.0-flash-thinking-exp, gemini-exp-1206
All models are configured in engines.yaml with their respective capabilities, token limits, and default settings.
Read the installation guide and the configuration and personalization guide.
The CLI uses workspaces with RAG (Retrieval-Augmented Generation) and automatically loads LLM-specific instructions based on the model you're using.
```text
ragbot chat [options]

Input Options:
  -p, --prompt PROMPT     Prompt text
  -f, --prompt_file FILE  Read prompt from file
  -i, --interactive       Interactive mode with history
  --stdin                 Read prompt from stdin

Workspace & Knowledge:
  -profile NAME           Workspace to use (auto-loads instructions and enables RAG)
  --rag / --no-rag        Enable/disable RAG retrieval (default: enabled)

Model Selection:
  -e {openai,anthropic,google}  Engine/provider
  -m MODEL                Model name (or 'flagship' for best)

Custom Instructions:
  -c PATH [PATH ...]      Explicit instruction files (overrides auto-loading)
  -nc                     Disable all instructions
```
The recommended way to use the CLI is with workspaces:
```shell
# Chat with a workspace - instructions auto-loaded, RAG enabled
ragbot chat -profile personal -p "What are my travel preferences?"

# Use Anthropic Claude (loads claude.md instructions)
ragbot chat -profile personal -e anthropic -p "Summarize my work history"

# Use OpenAI GPT-5.2 (loads chatgpt.md instructions)
ragbot chat -profile personal -e openai -m gpt-5.2 -p "Summarize my work history"

# Use Google Gemini (loads gemini.md instructions)
ragbot chat -profile personal -e google -p "Summarize my work history"
```

The system automatically loads the correct instruction file based on the LLM:
| Engine | Instruction File |
|---|---|
| anthropic | compiled/{workspace}/instructions/claude.md |
| openai | compiled/{workspace}/instructions/chatgpt.md |
| google | compiled/{workspace}/instructions/gemini.md |
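The engine-to-file mapping in the table above is a simple lookup; this sketch uses illustrative names rather than Ragbot's internal ones:

```python
# Engine -> compiled instruction file, per the table above.
INSTRUCTION_FILES = {
    "anthropic": "claude.md",
    "openai": "chatgpt.md",
    "google": "gemini.md",
}

def instruction_path(workspace: str, engine: str) -> str:
    """Return the compiled instruction file path for a workspace and engine."""
    return f"compiled/{workspace}/instructions/{INSTRUCTION_FILES[engine]}"
```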
Maintain conversation history across multiple prompts:
```text
ragbot chat -profile personal -i

> Tell me about my professional background
Ragbot.AI: [response based on RAG-retrieved knowledge]

> Summarize it in 3 bullet points
Ragbot.AI: [continues with context]

> /save session.json
Conversation saved to ...

> /quit
```

The following options show the full help output for reference. Note that dataset files (-d) are no longer supported - use workspaces with RAG instead.
```text
$ ragbot chat --help
usage: ragbot chat [-h] [-ls] [-p PROMPT | -f PROMPT_FILE | -i | --stdin]
                   [-profile PROFILE] [-c [CUSTOM_INSTRUCTIONS ...]] [-nc]
                   [--rag] [--no-rag]
                   [-e {openai,anthropic,google}] [-m MODEL] [-t TEMPERATURE]
                   [-mt MAX_TOKENS] [-l LOAD]

Ragbot.AI is an augmented brain and assistant. Learn more at https://ragbot.ai

options:
  -h, --help            show this help message and exit
  -ls, --list-saved     List all the currently saved JSON files.
  -p, --prompt          The user's input prompt
  -f, --prompt_file     Read prompt from a file
  -i, --interactive     Enable interactive mode with conversation history
  --stdin               Read prompt from stdin
  -profile              Workspace name (enables RAG and auto-loads instructions)
  -c                    Custom instruction file paths (overrides auto-loading)
  -nc                   Disable custom instructions
  --rag                 Enable RAG retrieval (default)
  --no-rag              Disable RAG - instructions only
  -e {openai,anthropic,google}
                        LLM engine/provider
  -m MODEL              Model name or 'flagship'
  -t TEMPERATURE        Creativity (0-2)
  -mt MAX_TOKENS        Max response tokens
  -l LOAD               Load previous session from file
```

Knowledge is retrieved via RAG (Retrieval-Augmented Generation) from indexed workspace content:
```shell
ragbot chat -profile personal -p "What are my travel preferences?"
# RAG enabled for workspace: personal
# [Response based on retrieved knowledge]
```

Note: The legacy -d (dataset) flag has been removed. Use workspaces with RAG instead.
Legacy examples (deprecated)
Example 1:
```text
rajivpant@RP-2021-MacBook-Pro ragbot % ./ragbot.py -d instructions/ datasets/public/ ../ragbot-data/datasets/personal/ ../ragbot-data/workspaces/my-employer/ -p "Write a short note in Rajiv's voice about Rajiv's job, coworkers, family members, and travel and food preferences for the person temporarily backfilling for his EA."
datasets being used:
 - instructions/
 - datasets/public/travel-food.md
 - datasets/public/employment-history.md
 - datasets/public/about.md
 - datasets/public/biography.md
 - ../ragbot-data/datasets/personal/accounts.md
 - ../ragbot-data/datasets/personal/contact-info.md
 - ../ragbot-data/datasets/personal/personal-family.md
 - ../ragbot-data/workspaces/my-employer/company.md
Using AI engine openai with model gpt-4o
[redacted in this example]
```

Example 2:
```text
rajivpant@RP-2021-MacBook-Pro ragbot % ./ragbot.py -d instructions/ datasets/public/ -p "Write a short resume of Rajiv"
datasets being used:
 - instructions/
 - datasets/public/travel-food.md
 - datasets/public/employment-history.md
 - datasets/public/about.md
 - datasets/public/biography.md
Using AI engine openai with model gpt-4o
[truncated in this example]
```

Example 3:
```text
./ragbot.py -p "Tell me a story about a brave knight and a wise wizard." -d datasets/story_characters
```

To use Ragbot.AI in interactive mode, use the -i or --interactive flag without providing a prompt via command line or input file. In this mode, you can enter follow-up prompts after each response.
Example:
```text
./ragbot.py -i -d datasets/story_characters
```

In the first example, Ragbot.AI generates a short note in Rajiv's voice using the dataset files in the ../ragbot-data/datasets folder. In the second example, Ragbot.AI writes a short resume of Rajiv using the public dataset files. In the third example, Ragbot.AI tells a story about a brave knight and a wise wizard using the dataset files in the datasets/story_characters folder.
```text
rajivpant@RP-2021-MacBook-Pro ragbot % ./ragbot.py -d ragbot.py -p "if no dataset files are being used, then I want the code to show that."
datasets being used:
 - ragbot.py
Using AI engine openai with model gpt-4o
```
To modify the code to show a message when no dataset files are being used, you can add an else statement after checking for the dataset files. Update the code in the `main()` function as follows:
```python
if curated_dataset_files:
    print("datasets being used:")
    for file in curated_dataset_files:
        print(f" - {file}")
else:
    print("No dataset files are being used.")
```
This will print "No dataset files are being used." when there are no dataset files detected.
rajivpant@RP-2021-MacBook-Pro ragbot %
Asking it to guess what some of the dataset files I use are for
```text
rajivpant@RP-2021-MacBook-Pro ragbot % find datasets ../ragbot-data/datasets -print | ./ragbot.py -d instructions/ datasets/public/ ../ragbot-data/datasets/personal/ ../ragbot-data/workspaces/my-employer/ -p "What do you guess these files are for?"
datasets being used:
 - instructions/
 - datasets/public/travel-food.md
 - datasets/public/employment-history.md
 - datasets/public/about.md
 - datasets/public/biography.md
 - ../ragbot-data/datasets/personal/accounts.md
 - ../ragbot-data/datasets/personal/contact-info.md
 - ../ragbot-data/datasets/personal/personal-family.md
 - ../ragbot-data/workspaces/my-employer/company.md
Using AI engine openai with model gpt-4o
```
These files appear to be related to the datasets of an AI system, likely for generating text or providing assistance based on the provided information. The files seem to be divided into two categories: public and private.
Public files:
- datasets/public/travel-food.md: Rajiv's travel and food preferences
- datasets/public/employment-history.md: Rajiv's employment history
- datasets/public/about.md: General information about Rajiv
- datasets/public/biography.md: Biography of Rajiv
Private files (stored in a separate private folder):
- datasets/personal/accounts.md: Semi-private personal account information, such as frequent flyer numbers or loyalty programs. Does not contain any confidential or sensitive information.
- datasets/personal/contact-info.md: Personal contact information, such as phone numbers and email addresses. Does not contain any confidential or sensitive information.
- datasets/personal/personal-family.md: Personal and family information, such as family members and relationships. Does not contain any confidential or sensitive information.
Workspace-specific files:
- workspaces/my-employer/company.md: Non-confidential, publicly available information related to the employer, including your role
Overall, these files seem to contain various information about a person, their preferences, and professional background, likely used to tailor the AI system's responses and assistance.
rajivpant@RP-2021-MacBook-Pro ragbot %

Asking technical questions about a project
❗️ In the current version of Ragbot.AI, the --stdin and --prompt options are mutually exclusive, so the following example no longer works as is. A future update to this README will provide an alternate example that achieves similar results.
```text
alexredmon@ar-macbook ~/s/scribe > cat docker-compose.yml | ragbot --stdin -p "which services will be exposed on which ports by running all services in the following docker-compose.yml file?"
In the given docker-compose.yml file, the following services are exposed on their respective ports:
1. "scribe" service: - Exposed on port 80 - Exposed on port 9009 (mapped to internal port 9009)
2. "scribe-feature" service: - Exposed on port 80
3. "scribe-redis" service: - Exposed on port 6379 (mapped to internal port 6379)
alexredmon@ar-macbook ~/s/scribe >
```

Using the Anthropic engine with the Claude Instant model
```text
rajivpant@RP-2021-MacBook-Pro ragbot % ./ragbot.py -e anthropic -m "claude-instant-v1" -p "Tell me 5 fun things to do in NYC."
No dataset files are being used.
Using AI engine anthropic with model claude-instant-v1
Here are 5 fun things to do in NYC:
1. Visit Central Park. Walk the paths, rent a paddle boat, visit the zoo, catch a Shakespeare in the Park performance.
2. Catch a Broadway show. New York is the center of the theater world with some of the greatest plays and musicals on Broadway and off Broadway.
3. Go to the top of the Empire State Building. Take in the panoramic views of all of NYC from one of its most famous landmarks.
4. Shop and dine in SoHo and the West Village. Explore trendy boutique shops and dig into meals at charming sidewalk cafes.
5. Take a free walking tour. There are numerous companies that offer guided walking tours of various NYC neighborhoods, covering history, culture, architecture and more.
rajivpant@RP-2021-MacBook-Pro ragbot %
```

Random Creativity
❗️ In the current version of Ragbot.AI, the --stdin and --prompt options are mutually exclusive, so the following example no longer works as is. A future update to this README will provide an alternate example that achieves similar results.
```text
alexredmon@ar-macbook ~ > cat names.csv
rajiv,
jim,
dennis,
alexandria
alexredmon@ar-macbook ~ > cat names.csv | ragbot.py --stdin -p "Generate a creative nickname for each of the following people"
rajiv, Rajiv Razzle-Dazzle
jim, Jolly JimJam
dennis, Daring Denmaster
alexandria, All-Star Alexi
alexredmon@ar-macbook ~ >
```

