
🩺 ClinixAI: Agentic Clinical Decision Support System

A self-correcting medical AI agent built with LangGraph, Groq, ChromaDB, and Streamlit. It combines Retrieval-Augmented Generation (RAG) over a medical knowledge base with automatic web-search fallback, conversation memory, and adaptive multimodal vision analysis for medical imaging.


🌐 Live Demo 👉 Open in Streamlit

Note: a free Groq API key is required to use the app; getting one takes about two minutes.

ClinixAI Screenshot


🧠 How the Agent Works

The agent uses a self-correcting LangGraph pipeline with 4 sequential nodes:

Architecture Diagram
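The self-correcting flow can be sketched in plain Python. This is an illustrative sketch only, with hypothetical function names standing in for the real LangGraph nodes and a toy keyword lookup standing in for ChromaDB vector search:

```python
# Illustrative control flow of the pipeline (hypothetical names):
# rewrite_query -> retrieve -> grade context -> generate, with a
# web-search fallback when retrieval returns nothing relevant.

def rewrite_query(question: str, history: list[str]) -> str:
    """Resolve vague follow-ups (e.g. "and children?") using prior turns."""
    if history and len(question.split()) < 4:
        return f"{history[-1]} {question}"
    return question

def retrieve(query: str, kb: dict[str, str]) -> list[str]:
    """Toy keyword retrieval standing in for ChromaDB vector search."""
    return [text for key, text in kb.items() if key in query.lower()]

def answer(question: str, history: list[str], kb: dict[str, str]) -> dict:
    query = rewrite_query(question, history)
    docs = retrieve(query, kb)
    if docs:  # grading step: the knowledge base had relevant material
        return {"source": "knowledge_base", "context": docs}
    # self-correction: fall back to web search when the KB comes up empty
    return {"source": "web_search", "context": [f"web results for: {query}"]}

kb = {"asthma": "Asthma is a chronic inflammatory airway disease."}
print(answer("What is asthma?", [], kb)["source"])   # knowledge_base
print(answer("What is dengue?", [], kb)["source"])   # web_search
```

The real graph wires these steps as LangGraph nodes with conditional edges, but the decision logic is the same.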


✨ Features

  • Self-Correcting RAG: automatically falls back to web search when the knowledge base lacks relevant context
  • Query Rewriting: resolves vague follow-up questions using conversation history before retrieval
  • Conversation Memory: remembers previous turns within a session for contextual multi-turn dialogue
  • Adaptive Vision Analysis: detects the image type (chest X-ray, brain MRI, CT scan, bone X-ray, skin, ultrasound) and applies a specialist-level prompt for each
  • Page Number Citations: every PDF-sourced answer shows exact page numbers from the encyclopedia
  • Structured Responses: all answers are returned in a consistent clinical format
  • Medical-Grade Embeddings: uses NeuML/pubmedbert-base-embeddings (trained on PubMed) instead of general-purpose embeddings for better medical retrieval accuracy
  • API Key per User: each user enters their own free Groq key, so there is no shared quota

πŸ—‚οΈ Project Structure

ClinixAI/
β”‚
β”œβ”€β”€ app.py                  # Streamlit UI β€” main entry point
β”œβ”€β”€ agent.py                # Streamlit wrapper with @st.cache_resource
β”œβ”€β”€ agent_core.py           # Pure agent logic β€” used by FastAPI
β”œβ”€β”€ api.py                  # FastAPI REST API (optional, for production use)
β”œβ”€β”€ retriever.py            # ChromaDB vector store + PDF loader
β”œβ”€β”€ evaluate.py             # RAGAS evaluation script
β”‚
β”œβ”€β”€ chroma_db_data/         # Auto-generated vector database (gitignored)
β”œβ”€β”€ .env                    # API keys (gitignored)
β”œβ”€β”€ requirements.txt        # Production dependencies
└── README.md

βš™οΈ Tech Stack

Component Technology
UI Streamlit
Agent Orchestration LangGraph
LLM β€” Reasoning Llama 3.3 70B via Groq
LLM β€” Vision Llama 4 Scout 17B via Groq
Embeddings NeuML/pubmedbert-base-embeddings (HuggingFace)
Vector Database ChromaDB
PDF Loader PyMuPDF
Web Search Fallback DuckDuckGo (langchain-community)
REST API FastAPI + Uvicorn
Evaluation RAGAS

🚀 Getting Started

1. Clone the repository

git clone https://github.com/LibaMariyamK/clinical_ai_agent.git
cd clinical_ai_agent

2. Create a virtual environment

python -m venv venv

# Windows
venv\Scripts\activate

# macOS / Linux
source venv/bin/activate

3. Install dependencies

pip install -r requirements.txt

4. Set up environment variables

Create a .env file in the root directory:

GROQ_API_KEY=your_groq_api_key_here

Get your free Groq API key at console.groq.com.
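As an illustrative check (not code from the repo), the key can be read the same way the app would read it, failing early with a helpful message when it is missing:

```python
import os

def require_groq_key() -> str:
    """Read GROQ_API_KEY from the environment (loaded from .env in the app)."""
    key = os.getenv("GROQ_API_KEY")
    if not key:
        raise RuntimeError("GROQ_API_KEY is not set; add it to .env or export it.")
    return key
```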

5. First run: knowledge base indexing

The PDF knowledge base is downloaded from Google Drive on first run and indexed into ChromaDB. This takes 3–8 minutes and only happens once. All subsequent runs load the existing database instantly.
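Page-number citations depend on each chunk carrying its source page as metadata at indexing time. A minimal sketch of that idea, with a plain list of page strings standing in for PyMuPDF output and a list of dicts standing in for ChromaDB documents (names are hypothetical):

```python
# Sketch of page-aware chunking: each chunk keeps its source page so
# answers can later cite "Pages X, Y".

def chunk_pages(pages: list[str], chunk_size: int = 40) -> list[dict]:
    chunks = []
    for page_no, text in enumerate(pages, start=1):
        for start in range(0, len(text), chunk_size):
            chunks.append({
                "text": text[start:start + chunk_size],
                "metadata": {"page": page_no},  # stored alongside the vector
            })
    return chunks

pages = ["Asthma is a chronic disease of the airways.",
         "Treatment includes inhaled corticosteroids."]
chunks = chunk_pages(pages)
cited = sorted({c["metadata"]["page"] for c in chunks})
print(f"Pages {', '.join(map(str, cited))}")   # Pages 1, 2
```

At answer time, the agent only has to collect the `page` metadata of the retrieved chunks to build the citation badge.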

6. Run the app

streamlit run app.py

🔌 FastAPI Backend (Optional)

The agent is also exposed as a REST API via FastAPI. Run both servers in separate terminals:

Terminal 1 (API server):

uvicorn api:app --host 0.0.0.0 --port 8000 --reload

Terminal 2 (Streamlit UI):

streamlit run app.py

Once the API is running, the Streamlit sidebar shows an "API Online" indicator, and all queries are routed through FastAPI.

API documentation is auto-generated at http://localhost:8000/docs

| Endpoint | Method | Description |
|---|---|---|
| / | GET | Health check |
| /health | GET | Health check |
| /query | POST | Text query with optional chat history |
| /query-with-image | POST | Vision + text query |
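A minimal stdlib client for the /query endpoint might look like this. The request body shown is an assumption for illustration; the actual schema is defined in api.py:

```python
import json
from urllib import request

# Hypothetical request body - check api.py (or /docs) for the real schema.
payload = {
    "question": "What are the first-line treatments for hypertension?",
    "chat_history": [],
}

def post_query(base_url: str = "http://localhost:8000") -> dict:
    """POST the payload to /query and return the parsed JSON response."""
    req = request.Request(
        f"{base_url}/query",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:  # requires the API server to be running
        return json.load(resp)
```

The auto-generated docs at http://localhost:8000/docs show the authoritative field names and types.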

🖥️ Usage

| Action | How |
|---|---|
| Ask a clinical question | Type in the chat input and press Enter |
| Upload a medical image | Use the sidebar file uploader (JPG/PNG) |
| Combined query | Upload an image AND ask a question; both are processed together |
| Clear conversation | Click 🗑️ Clear Conversation in the sidebar |

Source badges

Every response shows where the answer came from:

| Badge | Meaning |
|---|---|
| 📄 Knowledge Base (Pages X, Y) | Retrieved from the Gale Encyclopedia of Medicine |
| 🌐 Web Search (external source) | Knowledge base lacked context, so DuckDuckGo was used |

Supported image types

The vision model automatically detects and applies specialist prompts for:

| Image Type | Specialist Prompt Applied |
|---|---|
| Chest X-ray | Radiologist: lung fields, cardiac silhouette, mediastinum, diaphragm |
| Brain MRI | Neuroradiologist: parenchyma, ventricles, white matter |
| CT scan | Radiologist: density, Hounsfield units |
| Bone X-ray | Musculoskeletal radiologist: fractures, joint spaces |
| Skin | Dermatologist: ABCDE criteria |
| Ultrasound | Sonographer: echogenicity, vascularity |
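Once the vision model has labeled the image type, prompt selection reduces to a lookup. The mapping below is an illustrative sketch with hypothetical keys and abbreviated prompt text, not the repo's actual prompts:

```python
# Illustrative prompt routing: image-type label -> specialist system prompt.
SPECIALIST_PROMPTS = {
    "chest_xray": "You are a radiologist. Assess lung fields, cardiac silhouette, mediastinum, diaphragm.",
    "brain_mri": "You are a neuroradiologist. Assess parenchyma, ventricles, white matter.",
    "ct_scan": "You are a radiologist. Assess tissue density and Hounsfield units.",
    "bone_xray": "You are a musculoskeletal radiologist. Assess fractures and joint spaces.",
    "skin": "You are a dermatologist. Apply the ABCDE criteria.",
    "ultrasound": "You are a sonographer. Assess echogenicity and vascularity.",
}

def prompt_for(image_type: str) -> str:
    # Fall back to a generic prompt for unrecognized image types.
    return SPECIALIST_PROMPTS.get(
        image_type, "Describe notable findings in this medical image."
    )

print(prompt_for("skin"))  # You are a dermatologist. Apply the ABCDE criteria.
```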

📊 Evaluation

Evaluated on 5 clinical questions using a custom LLM-as-judge framework (Llama 3.3 70B):

| Metric | Score | Meaning |
|---|---|---|
| Answer Faithfulness | 0.85 ✅ | Answers stay grounded in retrieved context |
| Answer Relevancy | 0.80 ✅ | Answers directly address the questions asked |
| Context Precision | 0.60 ⚠ | Retrieved chunks are only partly relevant; room for improvement |
| Overall Average | 0.75 | |

Evaluated with NeuML/pubmedbert-base-embeddings over the Gale Encyclopedia of Medicine. Evaluation script: evaluate.py


🔑 Environment Variables

| Variable | Description | Required |
|---|---|---|
| GROQ_API_KEY | Your Groq API key for LLM inference | ✅ Yes |

⚠️ Disclaimer

This tool is intended as a decision aid for qualified healthcare professionals only. It does not replace professional clinical diagnosis, medical advice, or treatment decisions. Always consult a licensed medical professional for patient care.


📄 License

This project is licensed under the MIT License.


πŸ™ Acknowledgements
