77 changes: 77 additions & 0 deletions .cursor/rules/langgraph-agents.mdc
@@ -0,0 +1,77 @@
---
description: LangGraph agent patterns and best practices
globs: backend/**/*.py
alwaysApply: false
---

# LangGraph Agent Patterns

## State Definition

Use TypedDict with Annotated for reducers:

```python
import operator
from typing import Annotated, Any, Dict, List, Optional

from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict

class TripState(TypedDict):
    messages: Annotated[List[BaseMessage], operator.add]
    trip_request: Dict[str, Any]
    research: Optional[str]
    tool_calls: Annotated[List[Dict[str, Any]], operator.add]
```

## Agent Structure

Each agent should:
1. Extract data from state
2. Build prompt with template
3. Bind tools and invoke LLM
4. Process tool calls if any
5. Return updated state

```python
def research_agent(state: TripState) -> TripState:
req = state["trip_request"]
destination = req["destination"]

messages = [SystemMessage(content=prompt)]
tools = [essential_info, weather_brief]
agent = llm.bind_tools(tools)

res = agent.invoke(messages)
# ... process tool calls ...

    # Return only the updated keys; the reducers merge them into state
    return {"research": output, "tool_calls": calls}
```

## Graph Building

Use parallel edges for independent agents:

```python
def build_graph():
g = StateGraph(TripState)

    # Add nodes
    g.add_node("research", research_agent)
    g.add_node("budget", budget_agent)
    g.add_node("itinerary", itinerary_agent)

# Parallel execution from START
g.add_edge(START, "research")
g.add_edge(START, "budget")

# Converge to final agent
g.add_edge("research", "itinerary")
g.add_edge("budget", "itinerary")
g.add_edge("itinerary", END)

return g.compile() # No checkpointer for stateless requests
```

## Observability

Use `using_attributes` and `using_prompt_template` for tracing:

```python
with using_attributes(tags=["research"]):
with using_prompt_template(template=prompt_t, variables=vars_):
res = agent.invoke(messages)
```
36 changes: 36 additions & 0 deletions .cursor/rules/project-overview.mdc
@@ -0,0 +1,36 @@
---
description: AI Trip Planner project overview and conventions
alwaysApply: true
---

# AI Trip Planner Project

A multi-agent system for generating travel itineraries using LangGraph, FastAPI, and LangChain.

## Architecture

- **Backend**: FastAPI app in `backend/main.py`
- **Agents**: 4 specialized agents (Research, Budget, Local, Itinerary); the first three run in parallel via LangGraph, and Itinerary merges their output
- **RAG**: Optional vector search over `backend/data/local_guides.json`
- **Observability**: Arize tracing (optional)

## Key Patterns

1. **Graceful Degradation**: Tools try real APIs first, fall back to LLM-generated responses
2. **Parallel Execution**: Research, Budget, and Local agents run simultaneously
3. **Feature Flags**: Use environment variables (`ENABLE_RAG`, `TEST_MODE`)
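The feature-flag convention can be sketched as a small helper (the `flag_enabled` name is hypothetical — the actual code reads each variable inline with `os.getenv`):

```python
import os

# Hypothetical helper illustrating the flag convention; the project
# reads each variable inline rather than through a shared function.
def flag_enabled(name: str, default: str = "0") -> bool:
    """Treat unset, '0', 'false', and 'no' (any case) as disabled."""
    return os.getenv(name, default).lower() not in {"0", "false", "no"}

ENABLE_RAG = flag_enabled("ENABLE_RAG")
TEST_MODE = flag_enabled("TEST_MODE")
```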

## Running the Server

```bash
cd backend
source .venv/bin/activate
uvicorn main:app --host 127.0.0.1 --port 8000
```

## Environment Variables

- `OPENAI_API_KEY` or `OPENROUTER_API_KEY` (required)
- `ENABLE_RAG=1` for vector search
- `TAVILY_API_KEY` for real web search
- `ARIZE_SPACE_ID` + `ARIZE_API_KEY` for tracing
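A minimal startup check for these variables might look like the following sketch (`check_env` is a hypothetical helper, not part of the app):

```python
import os

# Hypothetical startup check for the variables listed above.
def check_env() -> list:
    """Return human-readable warnings about missing configuration."""
    warnings = []
    if not (os.getenv("OPENAI_API_KEY") or os.getenv("OPENROUTER_API_KEY")):
        warnings.append("Set OPENAI_API_KEY or OPENROUTER_API_KEY (required)")
    if not os.getenv("TAVILY_API_KEY"):
        warnings.append("TAVILY_API_KEY unset: web search falls back to the LLM")
    return warnings
```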
63 changes: 63 additions & 0 deletions .cursor/rules/python-backend.mdc
@@ -0,0 +1,63 @@
---
description: Python backend conventions for FastAPI and LangChain
globs: backend/**/*.py
alwaysApply: false
---

# Python Backend Standards

## Imports Order

1. Standard library
2. Third-party (fastapi, langchain, etc.)
3. Local modules
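For example (module names are illustrative, not the project's actual imports):

```python
# 1. Standard library
import json
import os
from typing import Any, Dict, Optional

# 2. Third-party
from fastapi import FastAPI
from pydantic import BaseModel

# 3. Local modules
from rag import LocalGuideRetriever  # illustrative local import
```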

## Pydantic Models

Use Pydantic for request/response validation:

```python
class TripRequest(BaseModel):
destination: str
duration: str
budget: Optional[str] = None
```

## Tool Definition

Use `@tool` decorator with clear docstrings:

```python
@tool
def essential_info(destination: str) -> str:
"""Return essential destination info like weather, sights, and etiquette."""
    # Try a real search API first, then fall back to the LLM
    query = f"essential travel info for {destination}"
    summary = _search_api(query)
    if summary:
        return summary
    return _llm_fallback(f"Give essential travel info for {destination}")
```

## Error Handling

Always use graceful degradation:

```python
# ✅ GOOD - Try API, fall back gracefully
try:
result = await external_api()
except Exception:
result = fallback_response()

# ❌ BAD - Let errors propagate
result = await external_api() # Will crash if API fails
```

## Environment Variables

Load via `python-dotenv` at module start:

```python
from dotenv import load_dotenv, find_dotenv
load_dotenv(find_dotenv())
```
62 changes: 62 additions & 0 deletions .cursor/rules/rag-patterns.mdc
@@ -0,0 +1,62 @@
---
description: RAG and vector search patterns
globs: backend/**/*.py
alwaysApply: false
---

# RAG Patterns

## Feature Flag

RAG is opt-in via environment variable:

```python
ENABLE_RAG = os.getenv("ENABLE_RAG", "0").lower() not in {"0", "false", "no"}
```

## Document Loading

Convert JSON data to LangChain Documents:

```python
def _load_local_documents(path: Path) -> List[Document]:
raw = json.loads(path.read_text())
docs = []
for row in raw:
content = f"City: {row['city']}\nGuide: {row['description']}"
metadata = {"city": row["city"], "source": row.get("source")}
docs.append(Document(page_content=content, metadata=metadata))
return docs
```

## Retriever Class

Implement with fallback to keyword search:

```python
class LocalGuideRetriever:
def retrieve(self, destination: str, interests: str, k: int = 3):
if not ENABLE_RAG:
return []

# Try vector search first
if self._vectorstore:
return self._vector_search(destination, interests, k)

# Fall back to keyword matching
return self._keyword_fallback(destination, interests, k)
```

## Adding RAG Context to Agents

Inject retrieved context into agent prompts:

```python
if ENABLE_RAG:
retrieved = GUIDE_RETRIEVER.retrieve(destination, interests)
if retrieved:
context_lines = ["=== Curated Local Guides ==="]
for item in retrieved:
context_lines.append(item["content"])
        prompt += "\n" + "\n".join(context_lines)
```
74 changes: 74 additions & 0 deletions FIX_PERMISSIONS.md
@@ -0,0 +1,74 @@
# Fix macOS Permission Issues for AI Trip Planner

## Quick Fix Steps

### Step 1: Grant Terminal Full Disk Access

1. Open **System Settings** (or System Preferences on older macOS)
2. Go to **Privacy & Security** → **Full Disk Access**
3. Click the **lock icon** 🔒 and enter your password
4. Click the **+** button to add an application
5. Navigate to **Applications** → **Utilities** → **Terminal**
6. Select **Terminal** and click **Open**
7. Make sure the checkbox next to Terminal is **checked** ✅
8. Close System Settings

### Step 2: Grant Terminal Network Access (if needed)

1. Open **System Settings** → **Privacy & Security** → **Network**
2. Look for **Terminal** in the list
3. If it's not there or blocked, add it and enable network access

### Step 3: Check Firewall Settings

1. Open **System Settings** → **Network** → **Firewall**
2. If Firewall is ON:
- Click **Options** or **Firewall Options**
- Click **+** to add an application
- Add **Python** (usually located at: `/usr/bin/python3` or your venv Python)
- Set it to **Allow incoming connections**
- Click **OK**

### Step 4: Restart Terminal

After making these changes:
1. **Quit Terminal completely** (Cmd+Q)
2. **Reopen Terminal**
3. Try running the server again

## Alternative: Use a Different Port

If permissions still don't work, try a different port in case 8000 is blocked or already in use:

```bash
cd /Users/jenny/Documents/ai-trip-planner
source backend/.venv/bin/activate
cd backend
uvicorn main:app --host 127.0.0.1 --port 3000
```

Then access at: http://localhost:3000

## Verify It's Working

After making changes, test with:

```bash
cd /Users/jenny/Documents/ai-trip-planner
source backend/.venv/bin/activate
cd backend
uvicorn main:app --host 127.0.0.1 --port 8000
```

You should see:
```
INFO: Uvicorn running on http://127.0.0.1:8000
```

## Still Having Issues?

If you still get "operation not permitted":
1. Make sure you're using `127.0.0.1` not `0.0.0.0`
2. Try ports: 3000, 5000, 8080, or 8888
3. Check if another app is using port 8000: `lsof -i :8000`
4. Restart your Mac (sometimes helps reset network permissions)
10 changes: 10 additions & 0 deletions backend/main.py
@@ -798,6 +798,16 @@ def serve_frontend():
return {"message": "frontend/index.html not found"}


@app.get("/frontend/{filename}")
def serve_frontend_file(filename: str):
    """Serve files from the frontend directory."""
    here = os.path.dirname(__file__)
    # basename() guards against path traversal via ".." in the URL
    path = os.path.join(here, "..", "frontend", os.path.basename(filename))
    if os.path.exists(path):
        return FileResponse(path)
    return {"message": f"frontend/{filename} not found"}


@app.get("/health")
def health():
return {"status": "healthy", "service": "ai-trip-planner"}