AesculTwin is a SMART on FHIR application that serves as a "Digital Twin" and "Second Brain" for cardiovascular surgeons, bridging the gap between raw EHR data, global clinical guidelines, and the surgeon's own accumulated experience.
| Component | Status | Progress |
|---|---|---|
| Backend | ✅ Complete | 100% |
| Mobile App | 🔨 In Progress | ~75% |
| Web Portal | 🔨 In Progress | ~60% (UI) |
Current Phase: Phase 7 - Web Portal & FHIR Integration
"Your Cognitive Exoskeleton in the OR."
Surgeons accumulate vast amounts of experience ("Pearls") that are often lost or siloed in mental notes. AesculTwin captures these insights, combines them with gold-standard evidence (ACC/AHA Guidelines), and applies them proactively to live patient data.
3-tier architecture: Client Applications → Backend Services → Data Layer
| Platform | Mode | Use Case |
|---|---|---|
| Mobile App | Work Mode | In the OR, quick access, voice-enabled, offline-ready |
| Web Portal | Learning Mode | At desk, deep analysis, video review, performance tracking |
Hybrid search: Vector similarity + Keyword matching → RRF Fusion → AI-grounded answers with citations
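The RRF (Reciprocal Rank Fusion) step above can be sketched as follows. This is an illustrative sketch only; the function name `rrfFuse` and the constant `k = 60` (the damping value commonly used in the RRF literature) are assumptions, not AesculTwin's actual implementation.

```typescript
// Reciprocal Rank Fusion: merge two ranked result lists into one.
// A document appearing high in both lists accumulates the highest score.
function rrfFuse(
  vectorHits: string[],   // doc IDs ranked by vector similarity
  keywordHits: string[],  // doc IDs ranked by keyword match
  k = 60                  // RRF damping constant (assumed default)
): string[] {
  const scores = new Map<string, number>();
  for (const list of [vectorHits, keywordHits]) {
    list.forEach((id, rank) => {
      // rank is 0-based, so the top result contributes 1 / (k + 1)
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

Because RRF only uses ranks, it needs no score normalization between the vector and keyword retrievers, which is why it is a common fusion choice for hybrid search.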
Interactive graph showing relationships between guidelines, procedures, and outcomes
- Epic OAuth: SMART on FHIR OAuth 2.0 flow implemented
- Data Ingestion: FHIR R4 resources (Patient, Observation, Condition, MedicationRequest)
- Safety Brief: Automated risk analysis (STS/EuroSCORE) on patient load
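As a sketch of the ingestion step, the snippet below extracts a display name from a FHIR R4 `Patient` resource. The field paths (`name[].family`, `name[].given[]`, `name[].use`) follow the FHIR R4 specification; the helper itself (`displayName`) is hypothetical, not AesculTwin's actual parser.

```typescript
// Minimal FHIR R4 Patient types -- only the fields used here.
interface FhirHumanName { use?: string; family?: string; given?: string[]; }
interface FhirPatient {
  resourceType: "Patient";
  name?: FhirHumanName[];
  birthDate?: string;
}

// Prefer the "official" name if present, else fall back to the first entry.
function displayName(patient: FhirPatient): string {
  const n = patient.name?.find(x => x.use === "official") ?? patient.name?.[0];
  if (!n) return "Unknown";
  return [...(n.given ?? []), n.family ?? ""].join(" ").trim();
}
```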
- 6,400+ clinical guideline embeddings (ACC/AHA/ESC/STS)
- Dual-source search: Guidelines + surgeon's personal notes
- Hybrid search: Vector similarity + keyword with RRF ranking
- Evidence-grounded: Every AI response includes citations
- Conversational memory: Multi-turn dialogue with context
- RAG citations: Every response grounded in evidence
- Follow-up questions: Contextual suggestions for deeper exploration
- Confidence scoring: Strong/Moderate/Weak evidence indicators
- Conversational AI Chat: ChatGPT-style knowledge Q&A
- Learning Feed: AI-curated medical journal updates
- Video Learning: Full-featured surgical video platform
- Upload videos for AI analysis with Gemini 2.0 Flash
- Auto-generated thumbnails (ffmpeg)
- AI surgical phase detection and key moment identification
- Spatial annotations with x,y coordinates
- Enhanced player: theater mode, fullscreen, variable speed (0.25x-2x)
- Frame-by-frame stepping (comma/period keys)
- A/B loop for repeating sections
- Keyboard shortcuts (Space, M, N, F, T, L, arrows)
- Video-specific Q&A with Gemini
- PostgreSQL-backed user annotations
- Performance Analytics: Regional & national benchmarks
- Knowledge Graph: Visual exploration of relationships
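The A/B loop behavior listed above can be captured in a small pure function: given the current playback time and the loop points, return the time the player should be at. This is a hypothetical helper for illustration, not the portal's actual player code.

```typescript
// A/B loop: once playback reaches point B, seek back to point A.
// Returns the corrected playback time; a null loop point disables looping.
function abLoop(
  currentTime: number,
  loopA: number | null,
  loopB: number | null
): number {
  if (loopA === null || loopB === null || loopB <= loopA) return currentTime;
  return currentTime >= loopB ? loopA : currentTime;
}
```

Keeping the loop logic pure like this makes it trivial to unit-test independently of the `<video>` element; the player would call it from a `timeupdate` handler and assign the result to `video.currentTime` when it differs.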
5. Enterprise-Grade Security (HIPAA Compliance)
- PHI Tokenization: De-identify before LLM processing (AES-256-GCM)
- JWT Authentication: 15-minute access + 7-day refresh tokens
- Biometric Auth: Face ID / Touch ID
- Audit Logging: Append-only HIPAA-compliant trail
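The PHI tokenization step can be sketched with Node's built-in `crypto` module. The token format (IV ‖ auth tag ‖ ciphertext, base64url-encoded) and key handling below are illustrative assumptions; in production the key would come from a KMS, never from application code.

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Illustrative key only -- a real deployment loads this from a KMS.
const KEY = randomBytes(32); // AES-256 requires a 32-byte key

// Encrypt a PHI value into an opaque token before LLM processing.
function tokenize(phi: string): string {
  const iv = randomBytes(12); // 96-bit nonce, recommended for GCM
  const cipher = createCipheriv("aes-256-gcm", KEY, iv);
  const ct = Buffer.concat([cipher.update(phi, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // 16-byte integrity tag
  return Buffer.concat([iv, tag, ct]).toString("base64url");
}

// Recover the original PHI value; throws if the token was tampered with.
function detokenize(token: string): string {
  const buf = Buffer.from(token, "base64url");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ct = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", KEY, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

GCM's auth tag gives both confidentiality and integrity, so a tampered token fails loudly at `decipher.final()` instead of silently decrypting to garbage.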
| Layer | Technology |
|---|---|
| Mobile | React Native + Expo 54 |
| Web | Next.js 15 + React |
| Backend | Fastify 5.x + TypeScript |
| Database | PostgreSQL 16 + pgvector 0.8.1 |
| AI | Google Gemini 2.0 Flash |
| Embeddings | text-embedding-004 (768-dim) |
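For reference, the similarity metric behind the 768-dim embedding search is plain cosine similarity (pgvector's `<=>` operator returns the cosine *distance*, i.e. 1 − similarity). A minimal TypeScript sketch:

```typescript
// Cosine similarity between two equal-length embedding vectors.
// Returns a value in [-1, 1]; 1 means identical direction.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```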
```shell
# 1. Start PostgreSQL
docker-compose up -d

# 2. Backend (port 3000)
cd server && npm install && npm run dev

# 3. Mobile app
cd app && npm install && npx expo start

# 4. Web portal (port 3001)
cd web-portal && npm install && npm run dev
```

Test credentials:
- Email: test@aescultwin.com
- Password: TestPassword123
- Swagger UI: http://localhost:3000/docs
- Health Check: http://localhost:3000/health
/AesculTwin
├── /app # Expo React Native mobile app (10 screens)
├── /server # Fastify backend API (30+ endpoints)
├── /web-portal # Next.js web app (11 pages)
└── /docs # Documentation
| Document | Description |
|---|---|
| ARCHITECTURE.md | System design & data flows |
| DATABASE.md | Schema reference (14 migrations) |
| API.md | API endpoint documentation |
| HIPAA_COMPLIANCE.md | Security & PHI protection |
| APP_DESIGN.md | Mobile app screens & components |
| ROADMAP.md | Development phases & timeline |
| TODO.md | Current tasks & GitHub issues |
| CHANGELOG.md | Version history & release notes |
| SECOND_BRAIN_ARCHITECTURE.md | Future vision & research |
See TODO.md for detailed task tracking and GitHub Issues for production implementation backlog.
- ✅ 40+ REST API endpoints
- ✅ 6,400+ guideline embeddings ingested
- ✅ Mobile app with 10 screens
- ✅ Web portal with 11 pages
- ✅ Hybrid RAG search with RRF fusion
- ✅ AI chat with evidence citations
- ✅ Video analysis with Gemini 2.0 Flash
- ✅ Enhanced video player with spatial annotations
- 🔄 Epic sandbox integration (in progress)
- 🔄 Production data connections (GitHub Issues #9-18)
- DICOM Integration: View Echo/Cath clips in the Twin Dashboard
- Video Analysis: AI lesion analysis from Angio clips
- Team Handoff: QR codes for context transfer to residents
- Offline Mode: Local cache with background sync
- Push Notifications: Clinical alerts & milestones
See ROADMAP.md for full development timeline.



