NoDetours is a comprehensive travel planning application that uses Large Language Models (LLMs) to create personalized travel itineraries based on user preferences.
NoDetours creates detailed, personalized travel plans with:
- Custom Itineraries: Day-by-day schedules tailored to your preferences
- Packing Lists: Customized recommendations based on destination and activities
- Budget Estimates: Detailed cost breakdowns for different spending levels
- Calendar Integration: Export your itinerary directly to your calendar
The system processes natural language requests like "Help me plan a 7-day trip to Paris focusing on museums and local food" and generates comprehensive travel recommendations.
- Natural Language Understanding: Simply describe your travel plans in plain English
- Preference Extraction: Automatically identifies destinations, durations, and preferences
- Contextual Information: Gathers weather forecasts, location details, and search results
- Multi-Modal Output: Provides itineraries, packing lists, and budget estimates
- Web Interface: Clean, intuitive UI with tabbed results display
- Calendar Export: Download your itinerary as an ICS file
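For the Paris request quoted above, the extracted preferences might look like the following. This is an illustrative schema only; the real output format of `search_query_extractor.py` may differ.

```python
# Illustrative extraction result for the Paris example above;
# the field names here are assumptions, not the project's actual schema.
extracted = {
    "destination": "Paris",
    "duration_days": 7,
    "preferences": ["museums", "local food"],
}
```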
The application consists of several components:
- Feature Extraction: Processes natural language input to identify travel preferences
- Search Query Generation: Creates effective search queries to gather relevant information
- Context Collection: Aggregates data from multiple sources (search, weather, maps)
- Output Generation: Produces detailed travel plans using an LLM
- Web Interface: Provides an intuitive user experience
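The components above can be sketched as a simple pipeline. The class and method names below (other than those visible in the project layout) are illustrative assumptions, not the actual API of `app/agent.py`.

```python
# Hypothetical sketch of how the NoDetours pipeline wires together;
# the real TravelPlannerAgent in app/agent.py may be structured differently.
class TravelPlannerAgent:
    def __init__(self, extractor, query_generator, collector, generator):
        self.extractor = extractor              # feature extraction
        self.query_generator = query_generator  # search query generation
        self.collector = collector              # context collection
        self.generator = generator              # output generation

    def plan(self, request: str) -> dict:
        features = self.extractor(request)            # destination, duration, preferences
        queries = self.query_generator(features)      # search queries
        context = self.collector(queries, features)   # weather, maps, search results
        return self.generator(features, context)      # itinerary, packing list, budget
```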
```
nodetours/
├── api/                               # API modules for external services
│   ├── app.py                         # FastAPI backend for web application
│   ├── llm_provider.py                # Unified interface for LLM providers
│   ├── maps.py                        # Maps API for location information
│   ├── scrape.py                      # Web scraping utilities
│   ├── search.py                      # Search API wrapper
│   └── weather.py                     # Weather API for forecast data
├── app/                               # Core application modules
│   ├── agent.py                       # Main Travel Planner Agent
│   └── modules/                       # Specialized modules
│       ├── context_collector.py       # Information aggregation
│       ├── guardrail.py               # Input validation
│       ├── output_generator.py        # Travel plan generation
│       ├── search_query_extractor.py  # Feature extraction
│       └── search_query_generator.py  # Query generation
├── config/                            # Configuration files
│   ├── config.yaml                    # Main configuration
│   └── eval_config.yaml               # Evaluation configuration
├── eval-data/                         # Evaluation datasets
│   ├── feature_extractor_data.json
│   ├── search_query_data.json
│   └── travel_assistant_data.json
├── evaluation_runs/                   # Evaluation output directory
├── static/                            # Web UI assets
│   ├── css/
│   │   └── styles.css                 # Application styling
│   └── js/
│       └── app.js                     # Frontend JavaScript
├── templates/                         # HTML templates
│   └── index.html                     # Main application page
├── utils/                             # Utility functions
│   └── helpers.py                     # Helper utilities
├── evaluator.py                       # LLM evaluation module
├── generate_report.py                 # Evaluation report generator
├── LICENSE                            # Apache 2.0 license
├── main.py                            # Application entry point
├── README.md                          # Project documentation
├── requirements.txt                   # Python dependencies
└── run_evaluation.py                  # Evaluation pipeline script
```
- Clone the repository and set up your Python environment:
```bash
# Create and activate a virtual environment (optional)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

- Create a `.env` file with your API keys:
```
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
WEATHER_API_KEY=your_weather_api_key
MAPS_API_KEY=your_maps_api_key
FIRECRAWL_API_KEY=your_firecrawl_key
```
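A quick way to catch a missing key before the app fails mid-request is to check the environment at startup. This is a minimal sketch, assuming the keys are already loaded into the environment (e.g. by python-dotenv); it is not part of the project's actual startup code.

```python
import os

# Sketch: verify the API keys listed above are present in the environment.
# If you use python-dotenv, call load_dotenv() before this check.
REQUIRED_KEYS = [
    "OPENAI_API_KEY",
    "ANTHROPIC_API_KEY",
    "WEATHER_API_KEY",
    "MAPS_API_KEY",
    "FIRECRAWL_API_KEY",
]

def missing_keys(env=os.environ):
    """Return the names of any required API keys that are unset or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]
```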
- Run the application:
```bash
python main.py
```

For the web interface:

```bash
python main.py --config config/config.yaml
```

For CLI mode:

```bash
python main.py --config config/config.yaml --cli
```

Once the application is running, access the web interface at http://localhost:8000 (or the configured host/port).
- Enter your travel plans in the text input field, e.g., "Help me plan a 3-day trip to Chicago with museums, parks, and local food"
- Click "Create Travel Plan" to generate your personalized itinerary
- View your results in the tabbed interface:
- Itinerary: Day-by-day travel plan
- Packing List: Recommendations for what to bring
- Budget: Estimated costs for your trip
- Click "Download Itinerary Calendar" to export your plans to an ICS file
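The ICS export boils down to emitting an iCalendar-formatted text file. The sketch below shows the minimal structure of such a file; it is an assumption about the format only, not the app's actual export code.

```python
# Minimal sketch of an iCalendar (ICS) event, the format behind the
# "Download Itinerary Calendar" button; NoDetours' real export may differ.
def ics_event(summary, dtstart, dtend):
    """Build a single-event ICS document (CRLF line endings per RFC 5545)."""
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "BEGIN:VEVENT",
        f"SUMMARY:{summary}",
        f"DTSTART:{dtstart}",
        f"DTEND:{dtend}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])
```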
When running in CLI mode:
- Enter your travel plans when prompted
- View your itinerary, packing list, and budget estimate in the console
- Type 'exit' to quit
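The CLI loop described above can be sketched as follows. The function and section names are hypothetical; the actual loop lives in `main.py` and may differ.

```python
# Hypothetical sketch of the CLI mode: prompt, plan, print, repeat
# until the user types 'exit'. Not the project's actual implementation.
def run_cli(plan_fn, input_fn=input, print_fn=print):
    while True:
        request = input_fn("Describe your trip (or 'exit' to quit): ")
        if request.strip().lower() == "exit":
            break
        result = plan_fn(request)
        for section in ("itinerary", "packing_list", "budget"):
            print_fn(result.get(section, ""))
```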
NoDetours includes a comprehensive evaluation system to compare different LLM providers for travel planning:
- Tests multiple LLM providers (OpenAI GPT-3.5/4, Anthropic Claude, etc.)
- Evaluates performance on various metrics (accuracy, relevance, completeness, usefulness, creativity)
- Uses a "judge" LLM to rate responses
- Generates detailed reports and visualizations
To run the full evaluation pipeline:
```bash
python run_evaluation.py --config config/eval_config.yaml --data eval-data/travel_assistant_data.json --sample-size 10
```

Parameters:
- `--config`: Path to the configuration file (default: `config.yaml`)
- `--data`: Path to the test data file (default: `eval-data/feature_extractor_data.json`)
- `--sample-size`: Number of test cases to sample (optional)
- `--output-dir`: Base directory for evaluation outputs (default: `evaluation_runs`)
- `--skip-evaluation`: Skip evaluation and use an existing results file
- `--results-file`: Path to an existing results file (if skipping evaluation)
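The `--sample-size` behaviour described above amounts to drawing a subset of test cases. This is a sketch of one plausible implementation, not necessarily how `run_evaluation.py` samples.

```python
import random

# Sketch of sampling test cases as --sample-size suggests; the actual
# sampling strategy in run_evaluation.py is an assumption here.
def sample_cases(cases, sample_size=None, seed=0):
    """Return all cases, or a fixed-seed random subset of the given size."""
    if sample_size is None or sample_size >= len(cases):
        return list(cases)
    return random.Random(seed).sample(cases, sample_size)
```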
If you already have evaluation results and just want to generate reports:
```bash
python run_evaluation.py --skip-evaluation --results-file path/to/evaluation_results.json
```

The system evaluates travel plans on these metrics (scale 1-10):
- Accuracy: How accurately the plan addresses user requirements
- Relevance: How relevant the recommendations are to user preferences
- Completeness: How comprehensive and detailed the plan is
- Usefulness: How practical and helpful the information would be
- Creativity: How innovative and personalized the suggestions are
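Comparing providers on these metrics comes down to aggregating per-case judge scores. The result schema below is an assumption for illustration; `evaluator.py` defines the real one.

```python
# Sketch of averaging judge-LLM scores (1-10) across the five metrics;
# the per-case result dict shape is assumed, not the project's actual schema.
METRICS = ["accuracy", "relevance", "completeness", "usefulness", "creativity"]

def average_scores(results):
    """Average each metric over a list of per-case score dicts."""
    return {m: sum(r[m] for r in results) / len(results) for m in METRICS}
```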
You can modify the `config.yaml` file to:
- Add or remove LLM providers
- Change model parameters (temperature, max tokens)
- Configure evaluation metrics
- Set up API providers for weather, maps, and search
Example configuration for an LLM provider:
```yaml
llm_providers:
  openai_gpt4:
    provider: "openai"
    model: "gpt-4"
    temperature: 0.7
    max_tokens: 4000
```

To add support for a new LLM provider:
- Add the provider configuration to `config.yaml`
- Ensure the `LLMProvider` class in `api/llm_provider.py` supports the new provider
- Add the necessary API key to your `.env` file
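One way to keep such provider dispatch extensible is a small registry that maps the config's `provider:` value to a client factory. This is a hypothetical sketch, not the actual structure of `api/llm_provider.py`.

```python
# Hypothetical provider registry; api/llm_provider.py may use a
# different mechanism (e.g. if/elif dispatch inside LLMProvider).
_PROVIDERS = {}

def register_provider(name):
    """Decorator mapping a config `provider:` value to a client factory."""
    def wrap(factory):
        _PROVIDERS[name] = factory
        return factory
    return wrap

def make_provider(cfg):
    """Build a client from a config.yaml-style provider entry."""
    name = cfg["provider"]
    if name not in _PROVIDERS:
        raise ValueError(f"Unsupported provider: {name}")
    return _PROVIDERS[name](cfg)

@register_provider("openai")
def _openai(cfg):
    # Placeholder for constructing a real OpenAI client
    return ("openai", cfg["model"])
```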
To integrate a new external service:
- Create a new wrapper class in the `api/` directory
- Update the `config.yaml` file with the new provider
- Modify `context_collector.py` to use the new service
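A new wrapper could mirror the shape of the existing ones in `api/`. The class below is hypothetical (a fictional local-events service with an assumed `EVENTS_API_KEY` variable); the real wrappers (`weather.py`, `maps.py`, `search.py`) may use a different interface.

```python
import os

# Hypothetical wrapper for a fictional events service, following the
# pattern of the api/ directory; names and interface are assumptions.
class EventsAPI:
    def __init__(self, api_key=None):
        # EVENTS_API_KEY is an assumed variable name for this sketch
        self.api_key = api_key or os.environ.get("EVENTS_API_KEY", "")

    def fetch(self, city, date_range):
        """Return events for a city; degrade gracefully if no key is set."""
        if not self.api_key:
            return []
        # ... a real HTTP call to the service would go here ...
        return []
```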
- Surya Krishna Guttikonda
- Monesh Rallapalli
- Anish Sudarshan Gada
- Prajwal Manohar
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.