A simple medical image analysis chatbot interface for the custom LLaVA-Med RAG (retrieval-augmented generation) implementation.
Live demo: https://llava-medrag.github.io/
## Prerequisites

- Node.js 18+ and npm
- FastAPI backend (optional, for full functionality)
## Quick Start

```bash
# Clone the repository
git clone https://github.com/LLaVA-MedRAG/LLaVA-MedRAG.github.io.git
cd LLaVA-MedRAG.github.io

# Install dependencies
npm install

# Start the development server
npm run dev
```

Access the app at http://localhost:5173.
## Deployment

### GitHub Pages

```bash
npm run deploy
```

### Self-Hosting

- Update the configuration in `vite.config.ts`:

```ts
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [react()],
  base: '/', // Remove GitHub Pages base path
  server: {
    host: '0.0.0.0',
    port: 5173
  }
})
```
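If the FastAPI backend runs alongside the frontend during development, Vite's built-in dev-server proxy can forward API calls and sidestep CORS. A minimal sketch, assuming an `/api` prefix and a backend on port 8000 (both assumptions, not part of the current setup):

```ts
// vite.config.ts (dev-proxy sketch; the /api prefix and port are assumptions)
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [react()],
  server: {
    host: '0.0.0.0',
    port: 5173,
    proxy: {
      // Forward /api/* to the FastAPI backend, stripping the prefix:
      // fetch('/api/status') -> http://localhost:8000/status
      '/api': {
        target: 'http://localhost:8000',
        changeOrigin: true,
        rewrite: (path) => path.replace(/^\/api/, ''),
      },
    },
  },
})
```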
- Build and serve:

```bash
npm run build
npm install -g serve
serve -s dist -p 5173
```

- Or with PM2 (recommended for production):
```bash
npm install -g pm2
npm run build
pm2 serve dist 5173 --name llava-medrag
pm2 save
```

### Docker

Create a `Dockerfile`:
```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
# Install all dependencies: the build step needs devDependencies (Vite, TypeScript)
RUN npm ci
COPY . .
RUN npm run build
RUN npm install -g serve
EXPOSE 5173
CMD ["serve", "-s", "dist", "-p", "5173"]
```

Build and run the image:
```bash
docker build -t llava-medrag .
docker run -d -p 5173:5173 llava-medrag
```

## Backend Configuration

Update the API endpoints in `src/App.tsx`:
```ts
// Around line 70: health check
const response = await fetch('http://YOUR_SERVER:8000/status');

// Around line 123: chat request
const response = await fetch('http://YOUR_SERVER:8000/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(requestBody)
});
```
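Hard-coding the server address means editing source for every deployment. One alternative is to read the base URL from a Vite environment variable; the sketch below assumes a `VITE_API_URL` variable and a `src/config.ts` helper, neither of which exists in the current codebase:

```ts
// src/config.ts (hypothetical helper, not in the repo)
// Vite exposes env vars prefixed with VITE_ via import.meta.env.
// Set it in a .env file, e.g.: VITE_API_URL=http://YOUR_SERVER:8000
export const API_BASE: string =
  import.meta.env.VITE_API_URL ?? 'http://localhost:8000';

// Usage in src/App.tsx would then become:
//   const response = await fetch(`${API_BASE}/status`);
```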
{ "status": "online" }POST /chat - Process chat message
```json
{
  "mode": "Auto | BrainMRI | ChestX-ray | Histopathology",
  "message": { "text": "...", "image": "base64 or null", "timestamp": "..." },
  "history": { "messages": [...], "images": [...] },
  "chat_id": "..."
}
```
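For reference, the request shape above can be written out as TypeScript types. This is an illustrative sketch only: the type names and the `fileToBase64` helper are hypothetical, and the element types of `history.images` are a guess from the schema:

```ts
// Hypothetical types mirroring the /chat request body above.
type Mode = 'Auto' | 'BrainMRI' | 'ChestX-ray' | 'Histopathology';

interface ChatMessage {
  text: string;
  image: string | null; // base64-encoded image, or null for text-only turns
  timestamp: string;
}

interface ChatRequest {
  mode: Mode;
  message: ChatMessage;
  history: { messages: ChatMessage[]; images: string[] }; // image type assumed
  chat_id: string;
}

// Hypothetical helper: encode an uploaded file as base64 via FileReader.
// Strips the "data:<mime>;base64," prefix; whether the backend expects the
// raw payload or a full data URL is not specified in the schema.
function fileToBase64(file: File): Promise<string> {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve((reader.result as string).split(',')[1]);
    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(file);
  });
}
```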
## Project Structure

```
src/
├── components/
│   ├── TopBar.tsx      # Status bar with online indicator
│   ├── Sidebar.tsx     # Chat list and mode selector
│   └── ChatWindow.tsx  # Main chat interface
├── App.tsx             # Application logic
└── main.tsx            # Entry point
```
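The online indicator in `TopBar.tsx` is presumably driven by the `GET /status` health check. A minimal polling hook along these lines could back it; this is a sketch only, and the hook name, interval, and `baseUrl` parameter are assumptions rather than the actual component code:

```ts
import { useEffect, useState } from 'react';

// Hypothetical hook: poll GET /status and report whether the backend is up.
function useBackendStatus(baseUrl: string, intervalMs = 10_000): boolean {
  const [online, setOnline] = useState(false);

  useEffect(() => {
    let cancelled = false;
    const check = async () => {
      try {
        const res = await fetch(`${baseUrl}/status`);
        const body = await res.json();
        if (!cancelled) setOnline(res.ok && body.status === 'online');
      } catch {
        if (!cancelled) setOnline(false); // network error => offline
      }
    };
    check();
    const id = setInterval(check, intervalMs);
    return () => { cancelled = true; clearInterval(id); };
  }, [baseUrl, intervalMs]);

  return online;
}
```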
## License

MIT License

## Author

Hasitha Gallella - GitHub

Built with React + TypeScript + Vite