Common questions and troubleshooting for RMAgent.
- General Questions
- Installation Issues
- Configuration Issues
- Database Issues
- LLM Provider Issues
- Performance Issues
- Output Quality Issues
- Privacy & Ethics
RMAgent is an AI-powered command-line tool for analyzing RootsMagic databases, generating biographical narratives, and conducting genealogical research. It uses large language models (Claude, GPT, or local Llama) to create natural-language biographies and answer questions about your family tree.
No. RMAgent works with RootsMagic database files (.rmtree), but you don't need an active RootsMagic subscription. You just need access to your database file.
Currently only RootsMagic 11 is supported. The .rmtree database format is specific to RM11.
Note: Earlier versions (RM7, RM8, RM9, RM10) use different database formats and are not compatible.
For most features: No. You can use:
- Data quality checks (no AI required)
- Template-based biographies (`--no-ai` flag)
- Person queries
- Timeline generation
- Hugo exports
For AI-powered features: Yes. You need an API key for:
- AI-generated biographies
- Interactive Q&A (`ask` command)
Cost-free option: Use Ollama with local models (no API key needed).
Software: RMAgent is free and open-source.
LLM API Costs:
- Anthropic Claude: ~$0.01-0.05 per biography (typical)
- OpenAI GPT-4o-mini: ~$0.005-0.02 per biography
- Ollama: Free (runs locally)
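The per-biography estimates above follow from token counts and per-million-token rates. This sketch shows the arithmetic; the token counts and rates are illustrative only, so check your provider's current pricing page:

```bash
# Back-of-envelope cost for one biography:
#   cost = (input_tokens * input_rate + output_tokens * output_rate) / 1,000,000
input_tokens=3000
output_tokens=1000
input_rate=0.15    # USD per million input tokens (illustrative)
output_rate=0.60   # USD per million output tokens (illustrative)
awk -v it="$input_tokens" -v ot="$output_tokens" \
    -v ir="$input_rate" -v our="$output_rate" \
    'BEGIN { printf "~$%.4f per biography\n", (it * ir + ot * our) / 1000000 }'
```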
Partial offline use:
- Database queries: ✅ Yes (offline)
- Data quality checks: ✅ Yes (offline)
- Template biographies: ✅ Yes (offline)
- AI biographies: ❌ No (requires API call)
- Q&A: ❌ No (requires API call)
Full offline use with Ollama:
- Install Ollama and download models locally
- All features work offline
Problem: Your Python version is too old.
Solution:

```bash
# Check current version
python3 --version

# Install Python 3.11 or higher
# macOS
brew install python@3.11

# Linux
sudo apt install python3.11
```

Problem: uv is not installed or not in PATH.
Solution:

```bash
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh

# Restart terminal
source ~/.bashrc  # or ~/.zshrc

# Verify
uv --version
```

Alternative: Use pip instead:

```bash
pip install -e .
```

Problem: SQLite ICU extension not found or incompatible.
Solution:

macOS:

```bash
# Extension should be included
ls -l sqlite-extension/icu.dylib  # Verify file exists
```

Linux:

```bash
# You may need to compile the extension
sudo apt install libsqlite3-dev libicu-dev
cd sqlite-extension
# Follow compilation instructions in sqlite-extension/README.md
```

Workaround: Contact support if the extension won't load.
Problem: Network issues or slow dependency resolution.
Solution:

```bash
# Try with verbose output
uv sync -v

# Try with pip instead
pip install -e .

# Check network connection
curl -I https://pypi.org/
```

Problem: Package not installed or virtual environment not activated.
Solution:

With uv:

```bash
# Reinstall
uv sync

# Run with uv prefix
uv run rmagent person 1
```

With pip:

```bash
# Activate virtual environment
source .venv/bin/activate

# Reinstall
pip install -e .

# Run
rmagent person 1
```

Problem: Configuration file doesn't exist.
Solution:

```bash
# Create from example
cp config/.env.example config/.env

# Edit with your settings
nano config/.env  # or use your text editor
```

Problem: LLM API key is incorrect, expired, or not set.
Solution:

- Check your `config/.env` file:

  ```bash
  cat config/.env | grep API_KEY
  ```

- Verify there are no extra spaces or quotes:

  ```bash
  # Wrong
  ANTHROPIC_API_KEY="sk-ant-xxxxx"   # Remove quotes
  ANTHROPIC_API_KEY= sk-ant-xxxxx    # Remove space

  # Correct
  ANTHROPIC_API_KEY=sk-ant-xxxxx
  ```

- Get a new API key:
  - Anthropic: https://console.anthropic.com/
  - OpenAI: https://platform.openai.com/
- Check that billing is enabled (Anthropic and OpenAI require a valid payment method)
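A quick way to catch the formatting mistakes above is to grep `config/.env` for a quote or a stray space after the `=`. This is a sketch; adjust the pattern if your key variables are named differently:

```bash
# Flag API key lines whose value starts with a quote or a space
# (both break parsing, as shown above)
grep -nE '^[A-Z_]*API_KEY=(["'\'']| )' config/.env 2>/dev/null \
  && echo "Fix the lines above" \
  || echo "API key lines look clean (or file not found)"
```

A clean file prints nothing from `grep`; any listed line number needs fixing.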
Problem: Config file exists but settings aren't applied.
Solution:

- Verify the file location:

  ```bash
  ls -l config/.env  # Must be in config/ directory
  ```

- Check the file contents:

  ```bash
  cat config/.env
  ```

- Try an absolute path:

  ```bash
  RM_DATABASE_PATH=/full/path/to/database.rmtree
  ```

- Use a command-line override:

  ```bash
  uv run rmagent --database /path/to/database.rmtree person 1
  ```
Problem: Database file doesn't exist at specified path.
Solution:

- Check that the file exists:

  ```bash
  ls -lh data/*.rmtree
  ```

- Update `config/.env` with the correct path:

  ```bash
  RM_DATABASE_PATH=data/your-actual-database.rmtree
  ```

- Copy the database to the correct location:

  ```bash
  mkdir -p data
  cp /path/to/your/database.rmtree data/
  ```
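If you are not sure where your database file actually lives, a filesystem search can locate it. This is a sketch; adjust the starting directory and depth to your setup:

```bash
# Search your home directory for RootsMagic database files
find "$HOME" -maxdepth 4 -name '*.rmtree' 2>/dev/null
```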
Problem: Database file is corrupted.
Solution:

- Make a backup first:

  ```bash
  cp data/database.rmtree data/database-backup.rmtree
  ```

- Open the file in RootsMagic and run "Database Tools > Test Database Integrity"
- If RootsMagic can repair it, export and reimport your data
- Restore from a backup if one is available
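Before opening RootsMagic, SQLite's built-in integrity check can confirm whether the file itself is damaged; it prints `ok` for a healthy database:

```bash
# PRAGMA integrity_check prints "ok" for a healthy database file
DB=data/database.rmtree
if [ -f "$DB" ]; then
  sqlite3 "$DB" "PRAGMA integrity_check;"
fi
```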
Problem: Wrong database version or file is not a RootsMagic database.
Solution:

- Verify it's a RootsMagic 11 database (.rmtree extension)
- Check the database in SQLite:

  ```bash
  sqlite3 data/database.rmtree ".tables"
  ```

  It should list tables such as `PersonTable`, `EventTable`, `NameTable`, etc.
- If tables are missing, the file may be corrupted
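To script the same check, you can query `sqlite_master` for the expected tables. This sketch uses the table names listed above:

```bash
# Report whether each core RootsMagic table is present
DB=data/database.rmtree
if [ -f "$DB" ]; then
  for t in PersonTable EventTable NameTable; do
    n=$(sqlite3 "$DB" "SELECT count(*) FROM sqlite_master WHERE type='table' AND name='$t';")
    [ "$n" = "1" ] && echo "$t: present" || echo "$t: MISSING"
  done
fi
```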
Problem: RMNOCASE collation not loaded (ICU extension issue).
Solution:
See "Could not load ICU extension" above.
Problem: Too many API requests in short time.
Solution:

- Wait a few seconds and retry
- Reduce the batch size:

  ```bash
  # Instead of exporting 100 people at once,
  # process in smaller batches
  uv run rmagent export hugo --batch-ids 1,2,3,4,5
  ```

- Increase the delay between requests (programmatic use only)
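The `--batch-ids` batches above can be generated mechanically. This sketch splits IDs 1-20 into comma-separated groups of five, one group per line:

```bash
# Emit "1,2,3,4,5", "6,7,8,9,10", ... one batch per line
seq 1 20 | xargs -n 5 | tr ' ' ','
```

Each output line can then be passed to a separate `uv run rmagent export hugo --batch-ids ...` invocation, with a pause between runs to stay under the rate limit.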
Problem: Model name is incorrect or model is deprecated.
Solution:

- Check current models at https://platform.openai.com/docs/models
- Update `config/.env`:

  ```bash
  # If using an old model name, switch to a current one
  OPENAI_MODEL=gpt-4o-mini
  ```

- Common current models:
  - `gpt-4o-mini` (fast, cheap)
  - `gpt-4o` (balanced)
  - `gpt-5-chat-latest` (if you have access)
Problem: Ollama server not running.
Solution:

- Start the Ollama server:

  ```bash
  ollama serve
  ```

- Verify it's running:

  ```bash
  curl http://localhost:11434/api/tags
  ```

- Check whether the model is downloaded:

  ```bash
  ollama list
  ```

- Pull the model if it's missing:

  ```bash
  ollama pull llama3.1
  ```
Problem: Requested model not downloaded.
Solution:

```bash
# List available models
ollama list

# Pull the model you need
ollama pull llama3.1

# Update config/.env
OLLAMA_MODEL=llama3.1
```

Problem: Quality tests are slow when processing a large database.
Solution:

Already fixed in Phase 5! The persistent caching system speeds this up dramatically:

```bash
# First run may be slow (builds cache)
uv run pytest tests/unit/test_quality.py  # ~40s

# Subsequent runs are fast (uses cache)
uv run pytest tests/unit/test_quality.py  # ~0.3s
```

The cache is automatically invalidated when the database file changes.
Problem: LLM API latency.
Solution:

- Use a faster model:

  ```bash
  # OpenAI GPT-4o-mini is fastest
  OPENAI_MODEL=gpt-4o-mini
  ```

- Use Ollama locally:

  ```bash
  # No network latency
  DEFAULT_LLM_PROVIDER=ollama
  ```

- Reduce max tokens:

  ```bash
  LLM_MAX_TOKENS=2000  # Faster than 3000+
  ```

- Use template mode for testing:

  ```bash
  uv run rmagent bio 1 --no-ai  # Instant, no API call
  ```
Problem: Processing tens of thousands of records.
Solution:

- Use category filters to check specific areas:

  ```bash
  uv run rmagent quality --category logical  # Fast
  ```

- Cached results are automatic (see above)
- Focus on high-severity issues:

  ```bash
  uv run rmagent quality --severity critical  # Very fast
  ```
Problem: Not enough data in database or LLM parameters too restrictive.
Solution:

- Use comprehensive length:

  ```bash
  uv run rmagent bio 1 --length comprehensive
  ```

- Increase max tokens:

  ```bash
  LLM_MAX_TOKENS=4000  # Allow longer responses
  ```

- Increase temperature (more creative):

  ```bash
  LLM_TEMPERATURE=0.5  # Default is 0.2
  ```

- Check that the database has sufficient data:

  ```bash
  uv run rmagent person 1 --events --family
  ```

- Try a different provider:
  - Anthropic Claude generally produces better genealogical narratives
  - OpenAI GPT-4o-mini may be too concise
Problem: AI hallucination or database data quality issues.
Solution:

- Check the source data:

  ```bash
  uv run rmagent person 1 --events --family
  ```

- Run quality checks:

  ```bash
  uv run rmagent quality --person 1  # Check for inconsistencies
  ```

- Lower the temperature (more factual):

  ```bash
  LLM_TEMPERATURE=0.1  # More deterministic
  ```

- Use template mode first to verify the data:

  ```bash
  uv run rmagent bio 1 --no-ai  # Pure database facts
  ```

- Always verify AI-generated content before publishing
Problem: Database missing source data or citation style issue.
Solution:

- Check the source data in the database:

  ```bash
  # Run quality check for source documentation
  uv run rmagent quality --category sources
  ```

- Try a different citation style:

  ```bash
  uv run rmagent bio 1 --citation-style narrative  # More readable
  ```

- Check that the RootsMagic database has citations linked to events
- Use footnote style for academic work:

  ```bash
  uv run rmagent bio 1 --citation-style footnote
  ```
Problem: Events missing dates or marked private.
Solution:

- Check that the person has dated events:

  ```bash
  uv run rmagent person 1 --events
  ```

- Include family events:

  ```bash
  uv run rmagent timeline 1 --include-family
  ```

- Check privacy settings:
  - Events marked `IsPrivate=1` are excluded by default
  - Adjust `config/.env` if needed (see Privacy section)
- Verify that dates are valid:

  ```bash
  uv run rmagent quality --category dates
  ```
Solution:

- Enable privacy settings in `config/.env`:

  ```bash
  RESPECT_PRIVATE_FLAG=true  # Honor IsPrivate flags
  APPLY_110_YEAR_RULE=true   # Protect recent living persons
  ```

- Mark people private in RootsMagic:
  - Right-click person → Edit Person → Privacy tab → Check "Private"
- Review before publishing:

  ```bash
  # Check what will be exported
  uv run rmagent bio 1  # Review content
  ```
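Assuming the 110-year rule hides anyone born within the last 110 years, the birth-year cutoff it implies is easy to compute:

```bash
# Birth-year cutoff implied by a 110-year living-person rule
cutoff=$(( $(date +%Y) - 110 ))
echo "People born in or after $cutoff are treated as potentially living"
```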
Yes, when using cloud LLM providers (Anthropic, OpenAI):
- Prompts (person data, events, places) are sent to API
- Responses are generated by their servers
- Follow their data retention policies
No, when using Ollama:
- Everything runs locally on your machine
- No data sent to external servers
- Complete privacy
Recommendation: Use Ollama for sensitive/private data.
Yes, but:

- Software license: Check the LICENSE file (MIT license, generally permissive)
- LLM provider terms:
  - Anthropic: https://www.anthropic.com/legal/consumer-terms
  - OpenAI: https://openai.com/policies/terms-of-use
  - Ollama: No restrictions (local)
- Verify AI-generated content: Always review before publishing
- Respect copyright: Source citations must be accurate
- Privacy laws: Comply with GDPR, CCPA, etc.
No - always verify!
Best practices:

- Use AI as a writing assistant, not a source of truth
- Verify all facts against the database:

  ```bash
  uv run rmagent person 1 --events --family
  ```

- Check for logical consistency:

  ```bash
  uv run rmagent quality --category logical
  ```

- Review sources and citations
- Use a lower temperature for factual content:

  ```bash
  LLM_TEMPERATURE=0.1  # More deterministic
  ```

- Compare with the template output:

  ```bash
  uv run rmagent bio 1 --no-ai  # Facts only
  uv run rmagent bio 1          # AI version
  ```
- Installation: INSTALL.md
- Usage: USAGE.md
- Configuration: CONFIGURATION.md
- Examples: EXAMPLES.md
- User Guide: docs/USER_GUIDE.md
- GitHub Issues: https://github.com/miams/rmagent/issues
- Discussions: https://github.com/miams/rmagent/discussions
When reporting bugs, include:
- RMAgent version: `uv run rmagent --version`
- Python version: `python3 --version`
- Operating system
- Full error message
- Steps to reproduce
- Contents of `config/.env` (remove API keys!)
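One way to gather all of the above while keeping keys out of the report is a small collection script. This is a sketch; the `bug-report.txt` filename and the `_API_KEY` redaction pattern are illustrative:

```bash
# Bundle diagnostics into bug-report.txt with API key values redacted
{
  uv run rmagent --version
  python3 --version
  uname -a
  sed -E 's/(_API_KEY=).*/\1<redacted>/' config/.env
} > bug-report.txt 2>&1 || true
```

Double-check `bug-report.txt` for secrets before attaching it to an issue.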
Open an issue with:
- Use case description
- Expected behavior
- Why existing features don't meet the need
- Proposed solution (if any)
Still have questions? Open an issue on GitHub: https://github.com/miams/rmagent/issues