An intelligent migration tool that analyzes your Terraform infrastructure and generates Railway configurations with detailed cost comparisons and migration guidance.
This tool helps you migrate from traditional IaaS (AWS, GCP, Azure) to Railway by:
- Parsing your existing Terraform configurations
- Analyzing current infrastructure costs using Infracost
- Translating resources to Railway equivalents using AI
- Generating ready-to-use Railway configuration files
- Providing detailed cost comparison and migration reports
- Support for monorepo deploys (separate `railway.json` files for each subfolder)
- Better indexing and use of Railway templates for out-of-the-box services
- Automated testing of `railway.json` (e.g. draft deploy via API) to validate output
- Intelligent Infracost usage bands (e.g. based on size of estate and services used) to compare
- Calculating DevOps time saved for TCO comparison
- Automated Analysis - No manual resource inventory needed
- Cost Transparency - See exactly what you're spending now vs. Railway
- AI-Powered Translation - Smart mapping of IaaS resources to Railway services
- Ready-to-Deploy Configs - Generated Railway files you can use immediately
- Migration Guidance - Step-by-step instructions for your specific setup
- 💰 Cost Savings: 30-60% reduction in infrastructure costs
- ⚡ Faster Deployment: Minutes instead of hours to deploy
- 🔧 Less Maintenance: No infrastructure management overhead
- 📈 Better DX: Modern developer experience with built-in CI/CD
Primary Input: Path to your Terraform directory
- Terraform files (`.tf` extension)
- Can be a single module or entire infrastructure
- Both HCL and JSON formats supported
Environment Variables (in a `.env` file):

```bash
# Infracost credentials (for cost analysis)
INFRACOST_API_KEY=ics_v1_...       # Service account token
INFRACOST_ORG_ID=your-org-id       # Organization ID
INFRACOST_ORG_SLUG=your-org-name   # Organization slug

# OpenRouter API (for AI translation)
OPENROUTER_API_KEY=sk-or-v1-...
```

The tool creates a complete migration package in your specified output directory:

```
output-directory/
├── railway.json           # Main Railway service configuration
├── railway-config.json    # Additional Railway settings
├── migration-report.md    # Detailed analysis and migration guide
└── provision-*.sh         # Setup scripts for databases/storage
```
Terraform Files → Parser → Infracost → AI Translator → Railway Files + Report
- Reads all `.tf` files in the input directory
- Extracts resources, variables, and outputs
- Identifies resource types and relationships
- Analyzes current infrastructure costs
- Provides breakdown by resource type
- Calculates monthly/hourly estimates
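The monthly/hourly arithmetic can be sketched against Infracost's JSON output, where `totalMonthlyCost` is serialized as a string and costs assume a 730-hour month; how the tool wires this up internally may differ:

```python
def monthly_and_hourly(report: dict) -> tuple[float, float]:
    """Extract the total monthly cost from an Infracost JSON report
    and derive an hourly figure (Infracost assumes 730 hours/month)."""
    monthly = float(report["totalMonthlyCost"])  # serialized as a string
    return monthly, monthly / 730

# The report dict would come from:
#   infracost breakdown --path ./terraform --format json
sample = {"totalMonthlyCost": "146.00"}
monthly, hourly = monthly_and_hourly(sample)
print(f"${monthly:.2f}/month ≈ ${hourly:.4f}/hour")
```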
- Uses OpenRouter AI to intelligently map resources
- Terraform EC2 → Railway Service
- Terraform RDS → Railway PostgreSQL
- Terraform S3 → Railway Object Storage
- Handles complex relationships and dependencies
- Creates `railway.json` with service definitions
- Generates provisioning scripts for databases
- Includes environment variable templates
- Cost comparison (current vs. Railway)
- Resource mapping details
- Step-by-step migration instructions
- Potential limitations and trade-offs
```bash
# Install Python packages
pip install -r requirements.txt

# Install Infracost (macOS)
brew install infracost

# Install Infracost (Linux)
curl -fsSL https://raw.githubusercontent.com/infracost/infracost/master/scripts/install.sh | sh
```

Create a `.env` file in the project root:
```bash
# Get Infracost API key from https://www.infracost.io/
INFRACOST_API_KEY=ics_v1_your_service_account_token
INFRACOST_ORG_ID=your-org-id
INFRACOST_ORG_SLUG=your-org-slug

# Get OpenRouter API key from https://openrouter.ai/
OPENROUTER_API_KEY=sk-or-v1-your-api-key
```

Run the tool:

```bash
python3 tf2railway.py --input /path/to/terraform --output ./migration-output
```

Review the output:

```bash
# Read the migration report
cat migration-output/migration-report.md

# Check Railway configuration
cat migration-output/railway.json

# Review provisioning scripts
ls migration-output/provision-*.sh
```

More examples:

```bash
# Analyze a Terraform module
python3 tf2railway.py --input ./terraform/modules/web-app --output ./railway-migration

# See detailed processing information
python3 tf2railway.py --input ./terraform/prod --output ./output --verbose

# Try it with included samples
python3 tf2railway.py --input sample/aws_ec2_ebs_docker_host --output test-output
```

Your Terraform directory should contain:
```
terraform-directory/
├── main.tf          # Main resources
├── variables.tf     # Variable definitions
├── outputs.tf       # Outputs (optional)
├── *.tf             # Any other Terraform files
└── modules/         # Submodules (optional)
    └── ...
```
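As a rough illustration of the parsing step, resource types can be inventoried with a regex scan over the `.tf` files. The tool itself may use a full HCL parser; this sketch only handles simple top-level `resource` blocks:

```python
import re
from collections import Counter
from pathlib import Path

# Matches top-level blocks like: resource "aws_instance" "web" {
RESOURCE_RE = re.compile(r'^resource\s+"([\w-]+)"\s+"([\w-]+)"', re.MULTILINE)

def inventory(terraform_dir: str) -> Counter:
    """Count resource types across every .tf file in the directory tree."""
    counts: Counter = Counter()
    for tf_file in Path(terraform_dir).rglob("*.tf"):
        for rtype, _name in RESOURCE_RE.findall(tf_file.read_text()):
            counts[rtype] += 1
    return counts
```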
Supported Resources:
- Compute: EC2, ECS, Lambda, Compute Engine, App Engine
- Databases: RDS, DynamoDB, Cloud SQL, DocumentDB
- Storage: S3, EBS, Cloud Storage
- Networking: VPC, Load Balancers, CloudFront
- And more...
The heart of the migration package:

- **Executive Summary**
  - Current infrastructure overview
  - Railway migration plan summary
- **Cost Analysis**
  - Current monthly costs (from Infracost)
  - Estimated Railway costs
  - Potential savings calculation
- **Resource Mapping**
  - How each Terraform resource maps to Railway
  - What gets replaced by Railway built-ins
- **Migration Steps**
  - Detailed step-by-step instructions
  - Commands to run
  - Manual configuration needed
- **Limitations & Trade-offs**
  - Features not available in Railway
  - When Railway is/isn't a good fit
Railway service configuration that includes:
- Service definitions
- Build settings
- Environment variables
- Port configurations
- Healthcheck endpoints
Executable bash scripts to set up:
- PostgreSQL/MySQL databases
- Redis instances
- Object storage buckets
- Other Railway resources
```
python3 tf2railway.py [OPTIONS]

Options:
  --input, -i PATH     Path to Terraform directory (required)
  --output, -o PATH    Output directory (default: ./railway-migration)
  --verbose, -v        Enable verbose output
  --help               Show help message
```

| Variable | Description | Required | Example |
|---|---|---|---|
| `INFRACOST_API_KEY` | Service account token | Yes | `ics_v1_...` |
| `INFRACOST_ORG_ID` | Organization UUID | Yes | `960e4c59-...` |
| `INFRACOST_ORG_SLUG` | Organization name | Yes | `my-org` |
| `OPENROUTER_API_KEY` | AI API key | Yes | `sk-or-v1-...` |
- Parses your Terraform files
- Queries Infracost API for pricing data
- Calculates costs based on:
- Resource types
- Instance sizes
- Storage amounts
- Data transfer estimates
- Regional pricing
The tool estimates Railway costs based on:
- Number of services
- Expected resource usage
- Database requirements
- Storage needs
- Railway's pricing tiers
Note: Railway costs can vary with actual usage. The estimates are starting points.
- Small projects (< 10 resources): ~30 seconds
- Medium projects (10-50 resources): 1-2 minutes
- Large projects (> 50 resources): 2-5 minutes
Processing time includes:
- Terraform parsing
- Infracost API calls
- AI translation
- File generation
Problem: `INFRACOST_API_KEY` not set

```bash
# Solution: Configure in .env or via CLI
infracost configure set api_key YOUR_KEY
```

Problem: Infracost installation fails

```bash
# macOS: Use Homebrew
brew install infracost

# Linux: Direct install
curl -fsSL https://raw.githubusercontent.com/infracost/infracost/master/scripts/install.sh | sh
```

Problem: OpenRouter API errors
- Check your API key is valid
- Ensure you have credits/balance
- Try again (temporary API issues)
Problem: No resources found
- Verify .tf files exist in input directory
- Check file permissions
- Ensure valid HCL syntax
- **Infracost**
  - Sign up at infracost.io
  - Create a service account token
  - Copy your organization details
- **OpenRouter**
  - Sign up at openrouter.ai
  - Add credits to your account
  - Generate an API key from the dashboard
Contributions welcome! Areas for improvement:
- Additional cloud provider support
- More resource type mappings
- Enhanced cost estimation
- Better error handling
MIT License - Feel free to use and modify for your needs.
This tool implements advanced AI best practices to maximize accuracy and reliability:

- **Self-Healing Validation Layer** - Automatically detects and flags:
  - Deprecated features (NIXPACKS builder)
  - Incorrect naming (`mongodb` vs `mongo`)
  - Fake CLI commands
  - Functions vs Services logic errors
- **Confidence Scoring (0-100)** - Every migration gets a confidence score:
  - HIGH (85-100): Proceed with migration
  - MEDIUM (70-84): Review carefully
  - LOW (0-69): Manual review required
- **Few-Shot Learning** - Uses verified migration examples:
  - Lambda → Functions patterns (100% accurate)
  - Complex Lambda → Service patterns
  - Real-world tested examples
- **Chain-of-Thought Reasoning** - For complex migrations:
  - Systematic analysis of what CAN migrate
  - Honest assessment of what SHOULD STAY on cloud
  - Hybrid architecture recommendations
- **Schema Validation with Retries** - Ensures reliable responses:
  - Validates JSON structure
  - Auto-retries on malformed responses (max 3 attempts)
  - Prevents parsing failures
- **Documentation Freshness Checks** - Ensures accuracy:
  - Validates Railway docs are current (<30 days old)
  - Warns if documentation needs refresh
  - Prevents outdated information
✅ 100% Accuracy on AWS Lambda migrations
✅ 90/100 Confidence on standard migrations (Fargate, ECS, WordPress)
✅ 30/100 Confidence on complex migrations (MSK, BigQuery) - correctly flags for review
✅ Maintains honesty about Railway limitations
Example Output:

```
🔍 Step 2.5: Validating AI translation...
✅ Validation passed - no critical issues

📊 Step 2.6: Calculating migration confidence score...
✅ Confidence Score: 90/100 (HIGH)
   Factors:
   • Clear Railway resource mappings identified

📋 Proceed with migration - high confidence in translation accuracy
```
What it does: Automatically checks the AI's output against known error patterns from learnings.md

How it works:

```
After AI generates Railway config:
├─ Check for deprecated NIXPACKS builder → Flag if found
├─ Validate database naming (mongo vs mongodb) → Flag if wrong
├─ Scan for fake CLI commands (railway volume) → Flag if present
├─ Verify Functions logic (size <96KB check) → Warn if suspicious
└─ Ensure migration_notes are comprehensive → Suggest if missing
```

Real example: If the AI generates `"builder": "NIXPACKS"`, the validator catches it before you deploy:

```
🔴 CRITICAL: Uses deprecated NIXPACKS builder. Should use RAILPACK or DOCKERFILE.
```
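A minimal sketch of what such a check could look like. The config shape and the two rules shown are illustrative; the tool's real rule set comes from learnings.md:

```python
def validate_config(config: dict) -> list[str]:
    """Flag known error patterns in a generated Railway config."""
    issues = []
    for service in config.get("services", []):
        builder = service.get("build", {}).get("builder", "")
        if builder.upper() == "NIXPACKS":
            issues.append("🔴 CRITICAL: Uses deprecated NIXPACKS builder. "
                          "Should use RAILPACK or DOCKERFILE.")
    for db in config.get("databases", []):
        if db.get("type") == "mongodb":  # Railway's template is named "mongo"
            issues.append('🔴 Database type should be "mongo", not "mongodb".')
    return issues

bad = {"services": [{"build": {"builder": "NIXPACKS"}}], "databases": []}
for issue in validate_config(bad):
    print(issue)
```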
What it does: Calculates a 0-100 confidence score based on migration complexity and known limitations

How it works:

```
Start with 100 points, then deduct for:
├─ Unsupported services (MSK: -30, BigQuery: -30)
├─ High complexity (>100 resources: -15, >50: -10)
├─ Many limitations (>5: -15, >3: -10)
├─ Hybrid recommendations (keep on cloud: -20)
└─ Complex networking (>10 resources: -10)

Score 85-100 = HIGH (proceed)
Score 70-84  = MEDIUM (review carefully)
Score 0-69   = LOW (manual review required)
```

Real example: AWS MSK gets 30/100 (LOW) because:
- Contains MSK (-30 points)
- Complex networking (-10 points)
- Recommended hybrid architecture (-20 points)
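The deduction scheme reduces to simple arithmetic; a sketch using the penalty values documented above (the summary field names are hypothetical, not the tool's actual data model):

```python
def confidence_score(summary: dict) -> tuple[int, str]:
    """Apply the documented deductions to a migration summary."""
    score = 100
    unsupported_penalty = {"msk": 30, "bigquery": 30}
    for svc in summary.get("unsupported_services", []):
        score -= unsupported_penalty.get(svc, 0)
    n = summary.get("resource_count", 0)
    if n > 100:
        score -= 15
    elif n > 50:
        score -= 10
    limits = len(summary.get("limitations", []))
    if limits > 5:
        score -= 15
    elif limits > 3:
        score -= 10
    if summary.get("hybrid_recommended"):
        score -= 20
    if summary.get("networking_resources", 0) > 10:
        score -= 10
    score = max(score, 0)
    band = "HIGH" if score >= 85 else "MEDIUM" if score >= 70 else "LOW"
    return score, band

msk = {"unsupported_services": ["msk"], "hybrid_recommended": True,
       "networking_resources": 12, "limitations": ["a", "b", "c", "d"]}
print(confidence_score(msk))  # → (30, 'LOW')
```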
What it does: Ensures the AI has access to current Railway documentation

How it works:

```
On each migration:
├─ Check railway-official-docs.md exists → Error if missing
├─ Calculate file age in days
├─ Compare against thresholds:
│   ├─ <14 days   = CURRENT ✅
│   ├─ 14-30 days = AGING ⚠️
│   └─ >30 days   = STALE ❌ (requires refresh)
└─ Provide curl command to update if needed
```

Real example: Prevents the AI from using outdated pricing or deprecated features:

```
⚠️ Railway documentation is 25 days old
📥 Consider refreshing: curl -s https://docs.railway.com/api/llms-docs.md > railway-official-docs.md
```
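The freshness check is just a file-age comparison; a sketch using the thresholds from the diagram above (using the file's modification time as its age):

```python
import time
from pathlib import Path

DOCS = Path("railway-official-docs.md")

def docs_freshness(path: Path = DOCS) -> str:
    """Classify the local Railway docs snapshot by file age."""
    if not path.exists():
        raise FileNotFoundError(f"{path} missing - fetch it first")
    age_days = (time.time() - path.stat().st_mtime) / 86400
    if age_days < 14:
        return "CURRENT"
    if age_days <= 30:
        return "AGING"
    return "STALE"
```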
What it does: Shows the AI verified examples before it generates your migration

How it works:

```
When Lambda detected in Terraform:
├─ Inject example 1: HTTP Lambda → Railway Function ✅
├─ Inject example 2: Cron Lambda → Railway Function ✅
├─ Inject example 3: Complex Lambda → Railway Service ✅
└─ AI learns correct patterns from these examples

Prompt becomes:
System: [Railway docs + verified examples]
User:   [Your Terraform]
AI:     [Generates migration using learned patterns]
```

Real benefit: Lambda migrations achieve 100% accuracy because the AI sees correct patterns first.
What it does: Guides the AI through systematic reasoning for complex migrations (>30 resources or MSK/BigQuery/Kafka)

How it works:

```
For complex infrastructures, AI follows these steps:

STEP 1: What CAN migrate to Railway?
└─ List: Lambda, EC2, ECS, RDS, S3...

STEP 2: What SHOULD STAY on the cloud provider?
├─ Identify: MSK (no Railway Kafka), BigQuery (no Railway warehouse)
└─ Document WHY they must stay

STEP 3: Map migrateable resources
├─ Lambda <96KB → Functions
├─ Lambda >96KB → Services
├─ EC2/ECS → Services
└─ RDS → Databases

STEP 4: Generate config + justification
└─ Include lambda_analysis, limitations, hybrid architecture steps
```

Real example: For MSK, the AI reasons:
- "MSK is managed Kafka"
- "Railway has no managed Kafka"
- "Must keep MSK on AWS"
- "Recommend Redis for simple queuing"
- "Provide hybrid architecture guidance"
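Although the tool drives this via prompting rather than a lookup table, STEP 1/2 amount to partitioning resources into "can migrate" and "must stay"; an illustrative sketch:

```python
# Services with no Railway-managed equivalent (per the reasoning above);
# the resource-type keys here are examples, not an exhaustive list
MUST_STAY = {
    "aws_msk_cluster": "no managed Kafka on Railway",
    "google_bigquery_dataset": "no data warehouse on Railway",
}

def partition(resource_types: list[str]) -> tuple[list[str], dict[str, str]]:
    """Split resources into (can migrate, must stay with reason)."""
    migrate, stay = [], {}
    for rtype in resource_types:
        if rtype in MUST_STAY:
            stay[rtype] = MUST_STAY[rtype]
        else:
            migrate.append(rtype)
    return migrate, stay

m, s = partition(["aws_lambda_function", "aws_msk_cluster", "aws_db_instance"])
# m holds the migratable resources; s flags MSK with its reason to stay
```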
What it does: Ensures the AI returns valid JSON and retries if malformed

How it works:

```
Attempt 1: Get AI response
├─ Validate JSON structure
├─ Check required keys exist
└─ Verify data types correct

If validation fails:
├─ Attempt 2: Retry with same prompt
├─ Attempt 3: Final retry
└─ If still fails: return with warning
```

Validated structure:

```
{
  "functions": [...],        // Must be array
  "services": [...],         // Must be array
  "databases": [...],        // Must be array
  "migration_notes": {...}   // Must be object
}
```

Real benefit: Eliminates "JSON parse error" failures that would stop your migration.
What it does: Adapts the prompt based on what's being migrated

How it works:

```
Base prompt = Official Railway docs

IF Terraform contains Lambda:
    ADD few-shot Lambda examples

IF migration has >30 resources OR MSK/BigQuery:
    ADD chain-of-thought reasoning instructions

IF migration has databases:
    EMPHASIZE correct naming (mongo not mongodb)

Final prompt = Contextually enhanced for your specific case
```

Real benefit: A simple Lambda gets a simple prompt; complex MSK gets detailed reasoning guidance.
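Prompt assembly can be sketched as conditional concatenation; the `*_SNIPPET`-style placeholder strings stand in for the tool's real prompt fragments, and the trigger heuristics are simplified:

```python
def build_prompt(base_docs: str, resources: list[str]) -> str:
    """Assemble the system prompt contextually, per the rules above."""
    LAMBDA_EXAMPLES = "[few-shot Lambda -> Function/Service examples]"
    COT_INSTRUCTIONS = "[chain-of-thought reasoning instructions]"
    DB_NAMING = "[database naming rules: use 'mongo', not 'mongodb']"

    parts = [base_docs]
    if any("lambda" in r for r in resources):
        parts.append(LAMBDA_EXAMPLES)
    complex_services = {"aws_msk_cluster", "google_bigquery_dataset"}
    if len(resources) > 30 or complex_services & set(resources):
        parts.append(COT_INSTRUCTIONS)
    if any(r.endswith(("_db_instance", "_sql_database")) for r in resources):
        parts.append(DB_NAMING)
    return "\n\n".join(parts)
```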
What it does: Learns from real deployment outcomes to improve future migrations

How it works:

```
After each deployment:
├─ YOU record outcome: success or failure
├─ System stores: config, errors, resource types
└─ Builds historical database (deployment-feedback.jsonl)

Analysis identifies patterns:
├─ Lambda → Functions success rate: 87.5%
├─ Lambda → Services success rate: 100%
├─ Common error: "function exceeded 96kb limit"
└─ Recommendation: Prefer Services for Lambda with dependencies

Future migrations benefit:
├─ AI sees: "Functions have 87.5% success, often hit size limit"
├─ AI adjusts: More conservative with Function recommendations
└─ Accuracy improves over time
```

Complete workflow:
```bash
# 1. Generate migration
python3 tf2railway.py --input terraform/ --output out/

# 2. Deploy to Railway
cd out && railway up

# 3. Record result
python3 tf2railway-feedback.py record \
  --config out/railway-config.json \
  --success   # or --failure with --error "message"

# 4. Build insights (after 5+ deployments)
python3 tf2railway-feedback.py analyze

# 5. Update learnings
python3 tf2railway-feedback.py analyze --update-learnings
```

Real example output:
```
📊 DEPLOYMENT FEEDBACK ANALYSIS
Total Deployments: 15
Success Rate: 86.7% (13 ✅ / 2 ❌)

💡 Key Insights:
• Railway Functions: 87.5% success (7/8)
• Common error: function exceeded 96kb limit

📝 Recommendation: For Lambda with dependencies, use Service instead
```
The learning cycle:
1. Deploy → 2. Record → 3. Analyze → 4. Update prompts → 5. Better next time
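The record/analyze cycle reduces to appending JSONL entries and aggregating them; a sketch in which the entry schema is an assumption mirroring the description above, not the tool's exact format:

```python
import json
from pathlib import Path

FEEDBACK = Path("deployment-feedback.jsonl")  # same file name the tool uses

def record(config_path: str, success: bool, error: str = "",
           path: Path = FEEDBACK) -> None:
    """Append one deployment outcome to the JSONL history."""
    entry = {"config": config_path, "success": success, "error": error}
    with path.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def success_rate(path: Path = FEEDBACK) -> float:
    """Percentage of recorded deployments that succeeded."""
    lines = [line for line in path.read_text().splitlines() if line.strip()]
    entries = [json.loads(line) for line in lines]
    if not entries:
        return 0.0
    return 100 * sum(e["success"] for e in entries) / len(entries)
```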
For full details, see AI_BEST_PRACTICES.md
The tool includes a feedback system to track deployment outcomes and continuously improve:
After deploying to Railway, record whether it succeeded or failed:
```bash
# Record successful deployment
python3 tf2railway-feedback.py record \
  --config output/railway-config.json \
  --success \
  --notes "Production deployment went smoothly"

# Record failed deployment with errors
python3 tf2railway-feedback.py record \
  --config output/railway-config.json \
  --failure \
  --error "Function exceeded 96KB limit" \
  --notes "Switched to Service instead"
```

View insights from all recorded deployments:
```bash
# See success rates and common errors
python3 tf2railway-feedback.py analyze

# Update learnings.md with insights
python3 tf2railway-feedback.py analyze --update-learnings

# List recent deployments
python3 tf2railway-feedback.py list --limit 20
```

Example Analysis:
```
📊 DEPLOYMENT FEEDBACK ANALYSIS
Total Deployments: 15
Success Rate: 86.7% (13 ✅ / 2 ❌)

💡 Key Insights:
• Lambda migrations: 100.0% success (8/8)
• Railway Functions: 87.5% success (7/8)
• Common errors: function exceeded 96kb limit
```
This feedback loop helps the AI learn from real-world deployments and improve future migrations.
- AI Accuracy: AI translation may not be perfect for complex setups (confidence scoring helps identify these)
- Cost Estimates: Railway costs are estimates; actual costs may vary
- Manual Steps: Some configuration requires manual intervention
- Resource Coverage: Not all Terraform resources have Railway equivalents
- Clean Terraform Code: Well-organized Terraform with clear resource names produces better results
- Start Small: Test with a single module before running on entire infrastructure
- Review Carefully: Always review generated configurations before deploying
- Test Migration: Use Railway's preview environments to test before production
- Incremental Migration: Migrate one service at a time for complex applications
Questions? Check the migration report for specific guidance on your infrastructure.