MyHealthAgent is a GenAI-powered, multi-agent nutrition assistant designed to help users log meals, extract medical conditions from lab-report images, and receive real-time, personalized dietary guidance.

- Engineered a GenAI-powered nutrition assistant that analyzes user chronic conditions and meal logs to deliver real-time, personalized dietary recommendations in under 2 seconds.
- Formulated a condition-to-nutrition pipeline that:
  1. Extracts user medical conditions from lab-report images via OCR + a LLaMA-based model.
  2. Computes 10+ macro- and micro-nutrient thresholds tailored to those conditions.
- Designed a web app enabling users to chat with a multi-agent system for:
  - Condition-specific food recommendations
  - Dietary planning
  - Food image classification (~90% ingredient-recognition accuracy)
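The disease-extraction step can be sketched in plain Python: OCR text from the lab report is wrapped in a prompt for the LLaMA model, and the model's answer is parsed into a disease list. The prompt wording and the comma-separated answer format below are illustrative assumptions, not the project's exact code.

```python
# Sketch of the OCR-text → LLM-prompt → disease-list step.
# Prompt wording and answer format are assumptions for illustration.

def build_extraction_prompt(ocr_text):
    """Wrap raw OCR output in an instruction for the LLM."""
    return (
        "Extract the medical conditions mentioned in this lab report. "
        "Answer with a comma-separated list only.\n\n" + ocr_text
    )

def parse_disease_list(llm_answer):
    """Parse the model's comma-separated answer into a clean list."""
    return [d.strip() for d in llm_answer.split(",") if d.strip()]

# e.g. parse_disease_list("Diabetes, Hypertension") → ["Diabetes", "Hypertension"]
```

In the real pipeline the OCR text would come from `pytesseract` and the answer from the LLaMA-based model; both are stubbed out here.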
```
┌──────────────────────────────────────────────────────────────────┐
│                          MyHealthAgent                           │
│                                                                  │
│  ┌───────────────────────┐    ┌───────────────────────────────┐  │
│  │ Frontend (/frontend)  │    │ Backend (app.py, etc.)        │  │
│  │                       │    │                               │  │
│  │  React app (chat UI)  │    │  Flask API endpoints          │  │
│  │  • /src, /public,     │    │  • /api/chat                  │  │
│  │    /build,            │    │  • /api/get_thresholds        │  │
│  │    package.json       │    │  • /api/process_meal          │  │
│  │                       │    │  • /api/extract_diseases      │  │
│  └───────────────────────┘    │                               │  │
│                               │  LangGraph StateGraph         │  │
│                               │  • tracks "current intent"    │  │
│                               │    per session                │  │
│                               │  • routes to greeting,        │  │
│                               │    food_logging, planning,    │  │
│                               │    health_advice, or other    │  │
│                               │    agents (LLM calls)         │  │
│                               │                               │  │
│                               │  Food-classification model    │  │
│                               │  (ResNet-based, Food101)      │  │
│                               │                               │  │
│                               │  OCR + LLaMA model for        │  │
│                               │  disease extraction           │  │
│                               │                               │  │
│                               │  Nutrient calculator          │  │
│                               │  (combines thresholds, etc.)  │  │
│                               │                               │  │
│                               │  SQLite / in-memory store     │  │
│                               │  (session state, logs)        │  │
│                               └───────────────────────────────┘  │
└──────────────────────────────────────────────────────────────────┘
```
- **Frontend** (folder: `/frontend`)
  - Built with React (v18+) and served by `npm run start` (development) or `npm run build` (production).
  - Provides a chat interface that calls the Flask backend via REST.
- **Backend**
  - Written in Flask (v2.0+). Entry point: `app.py` in the repo root.
  - Defines API endpoints:
    - `POST /api/chat` – receives a user message, classifies intent, and routes it through a LangGraph-controlled multi-agent system.
    - `POST /api/get_thresholds` – sets user diseases and computes nutrient thresholds (via `nutrient_calculator.py`).
    - `POST /api/process_meal` – handles image- or text-based meal logging (food classification + nutrient accumulation).
    - `POST /api/extract_diseases` – runs OCR on a medical image and uses a LLaMA-based prompt to extract disease names.
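A hypothetical request/response pair for `POST /api/chat`. The field names follow the endpoint-verification notes in this README; the exact schema and values may differ.

Request body:

```json
{ "message": "What should I have for dinner?" }
```

A possible response (shape assumed):

```json
{
  "assistant_response": "How about grilled salmon with vegetables?",
  "intent": "meal_planning"
}
```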
- **Models** (folder: `/models`)
  - `food101_model.pth` – pretrained PyTorch model (ResNet-based) for food classification.
- **Key Python modules** (in the root):
  - `food_model.py` – loads `models/food101_model.pth`; defines `load_model`, `predict_food`, and image preprocessing.
  - `nutrient_calculator.py` – contains `get_food_nutrients(...)` (nutrient-DB lookup) and the `combine_thresholds(diseases)` logic.
  - `app.py` – main Flask application (described above).
  - `requirements.txt` – pins all Python dependencies.
- **Jupyter notebook**
  - `Image_Classification.ipynb` – demo/prototype notebook for training and testing the food classifier.
- **Static assets**
  - `blood-sugar-levels-example.jpg`, `pizza.jpg` – example images used for testing the OCR or the classifier.
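The threshold-combination logic in `nutrient_calculator.py` might look roughly like this. The per-disease tables and the "keep the most restrictive limit" merge rule are illustrative assumptions, not the project's actual values or algorithm.

```python
# Illustrative per-disease daily upper bounds. These numbers are
# placeholders for the sketch, not medical guidance or real project data.
DISEASE_THRESHOLDS = {
    "Diabetes":     {"sugar_g": 25.0, "carbs_g": 200.0},
    "Hypertension": {"sodium_mg": 1500.0, "sugar_g": 36.0},
}

def combine_thresholds(diseases):
    """Merge per-disease limits, keeping the most restrictive (lowest) bound."""
    combined = {}
    for disease in diseases:
        for nutrient, limit in DISEASE_THRESHOLDS.get(disease, {}).items():
            combined[nutrient] = min(limit, combined.get(nutrient, limit))
    return combined

# combine_thresholds(["Diabetes", "Hypertension"])
# → {"sugar_g": 25.0, "carbs_g": 200.0, "sodium_mg": 1500.0}
```

Taking the minimum across diseases is one reasonable merge policy; the real module may weight or override limits differently.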
```
MyHealthAgent/
├── app.py
├── food_model.py
├── nutrient_calculator.py
├── requirements.txt
├── Image_Classification.ipynb
├── blood-sugar-levels-example.jpg
├── pizza.jpg
├── models/
│   └── food101_model.pth
├── frontend/
│   ├── build/            # auto-generated by `npm run build` (production)
│   ├── node_modules/     # auto-generated by `npm install`
│   ├── public/
│   │   ├── index.html
│   │   └── …
│   ├── src/
│   │   ├── components/   # React components (ChatWindow, MessageBubble, etc.)
│   │   ├── App.jsx
│   │   └── index.jsx
│   ├── package.json
│   └── package-lock.json
└── README.md
```
```bash
git clone https://github.com/Thejesh-M/MyHealthAgent_Multi-Agent-Assistant.git
cd MyHealthAgent_Multi-Agent-Assistant
```
- **Create a Python virtual environment** (Python 3.8+ recommended):

  ```bash
  python3 -m venv .venv
  source .venv/bin/activate   # on macOS/Linux
  ```

- **Install Python dependencies:**

  ```bash
  pip install --upgrade pip
  pip install -r requirements.txt
  ```

- **Set environment variables:**

  Create a file named `.env` in the project root (or export the variables in your shell). The app uses the USDA FoodData Central API to look up nutrients for foods; create an API key from the provided link and set it in the `.env` file:

  ```
  USDA_FOOD_API="your_USDA_API_KEY"
  PORT=5000
  ```

- **Run the Flask server:**

  ```bash
  export FLASK_APP=app.py   # macOS/Linux
  flask run --host=0.0.0.0 --port=$PORT
  ```

  Or simply:

  ```bash
  python app.py
  ```

  You should see something like:

  ```
  * Serving Flask app "app.py" (lazy loading)
  * Environment: development
  * Debug mode: on
  * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
  ```
- **Verify the backend endpoints:**
  - `GET /` → (if you added a root route).
  - `POST http://localhost:5000/api/chat` with body `{ "message": "Hi there" }` → should return JSON with `assistant_response`, `intent`, etc.
  - `POST http://localhost:5000/api/get_thresholds` with body `{ "diseases": ["Diabetes", "Hypertension"] }`
  - `POST http://localhost:5000/api/extract_diseases` (form data: `medical_image` = JPEG/PNG, `entered_text` = some text)
  - `POST http://localhost:5000/api/process_meal` (form data: `food_image` or `entered_text`).
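With the server running, a chat request can be built and sent from Python using only the standard library. The sketch below only constructs the request; the commented `urlopen` call is what actually sends it once the backend is up.

```python
import json
import urllib.request

def build_chat_request(message, base_url="http://localhost:5000"):
    """Build a POST request for the /api/chat endpoint with a JSON body."""
    payload = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        base_url + "/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Hi there")

# With the Flask backend running locally, send it with:
#   with urllib.request.urlopen(req) as resp:
#       reply = json.load(resp)   # expect keys like "assistant_response", "intent"
```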
- **Navigate into the frontend folder:**

  ```bash
  cd frontend
  ```

- **Install Node packages** (requires Node 16+ & npm 8+):

  ```bash
  npm install
  ```

  This will populate `node_modules/` and create a lockfile (`package-lock.json`).

- **Run in development mode:**

  ```bash
  npm start
  ```

  This launches a dev server at `http://localhost:3000` (by default). Open your browser to `http://localhost:3000` → you should see the chat UI.
- **Interact at `http://localhost:3000`:**
  - Post an image of a lab report (e.g. `blood-sugar-levels-example.jpg`) and ask to extract diseases → `/api/extract_diseases` uses OCR + a LLaMA prompt.
  - Type "Hi" → the intent classifier should route to `greeting_agent`.
  - Upload a food photo (e.g. `pizza.jpg`) → `/api/process_meal` will classify and log it.
  - Ask "What should I have for dinner?" → `meal_planning_agent` produces a bullet list.
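The meal-logging flow behind `/api/process_meal` boils down to: predicted food → nutrient lookup → add to the session's running totals. A minimal sketch, in which the nutrient table and field names are placeholders rather than real USDA data:

```python
# Placeholder nutrient table; the real app queries the USDA FoodData
# Central API instead of a hard-coded dict.
FAKE_NUTRIENT_DB = {"pizza": {"calories": 285.0, "protein_g": 12.0}}

def log_meal(food, totals):
    """Add the nutrients of one serving of `food` to the running totals."""
    nutrients = FAKE_NUTRIENT_DB.get(food, {})
    for key, value in nutrients.items():
        totals[key] = totals.get(key, 0.0) + value
    return totals

session_totals = {"calories": 0.0, "protein_g": 0.0}
log_meal("pizza", session_totals)   # totals now include one serving of pizza
```

The accumulated totals can then be compared against the per-user nutrient thresholds to drive recommendations.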
Inside `app.py`, we maintain a small StateGraph with a START state and five "intent nodes":
- START – initial state (no message yet).
- greeting – user is saying “hello” or small talk.
- food_logging – user is reporting what they have eaten (text or image).
- meal_planning – user is asking what to eat in the future.
- health_advice – user is asking for general health/nutrition guidance.
- other – fallback for messages that don’t fit above.
At runtime:

```python
current_node = state["current_node"]   # e.g. "START" or "meal_planning"

# 1. Classify the new user message:
detected_intent = classify_intent(user_message)

# 2. Traverse the graph: find an outgoing edge from `current_node` whose
#    condition (detected_intent == dest) is true → that dest becomes next_node.
#    If no edge matches, next_node = "other".

# 3. Update the session state:
state["current_node"] = next_node

# 4. Call the matching agent to produce the reply:
reply = _AGENT[next_node](state, user_message)
```

Each "agent" function (e.g. `meal_logging_agent`, `planning_agent`) takes the `ChatState` and the raw user text (or processed text + predicted food) and returns a string, which is then sanitized and returned in the JSON response.
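The routing above can be exercised as a self-contained sketch. The keyword-based `classify_intent` and the lambda agents are stand-ins for the real LLM calls, and the `AGENTS` table is a simplified version of the graph's edges:

```python
# Stand-in intent classifier; the real project uses an LLM for this step.
def classify_intent(message):
    text = message.lower()
    if any(w in text for w in ("hi", "hello", "hey")):
        return "greeting"
    if "ate" in text or "had" in text:
        return "food_logging"
    if "plan" in text or "dinner" in text:
        return "meal_planning"
    return "other"

# Stand-in agents; each takes (state, message) and returns a reply string.
AGENTS = {
    "greeting":      lambda state, msg: "Hello! How can I help?",
    "food_logging":  lambda state, msg: "Logged your meal.",
    "meal_planning": lambda state, msg: "Here is a dinner idea...",
    "other":         lambda state, msg: "Could you rephrase that?",
}

def step(state, user_message):
    """One turn: classify, fall back to 'other', update state, run the agent."""
    next_node = classify_intent(user_message)
    if next_node not in AGENTS:
        next_node = "other"
    state["current_node"] = next_node
    return AGENTS[next_node](state, user_message)

state = {"current_node": "START"}
step(state, "Hi there")   # → "Hello! How can I help?"
```

In the actual app this dispatch is handled by a `langgraph` StateGraph rather than a plain dict, but the control flow is the same.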
- Built with Flask and React.
- OCR powered by `pytesseract`.
- Food classification courtesy of a pretrained Food101 model (ResNet).
- Intent classification & disease extraction via `langchain_ollama` + a LLaMA-based prompt.
- State management / multi-agent routing via `langgraph`.
Made with ❤️ by Thejesh M