AI tooling for OpenEMR: ambient listening, note summarization, and automated coding.
This project implements a SMART on FHIR application that integrates with OpenEMR. The frontend authenticates via OAuth2, retrieves patient context from the FHIR API, and sends audio and clinical data to backend services for processing.
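The EHR-launch leg of that OAuth2 flow can be sketched as building a SMART authorization request. This is a minimal illustration only: the endpoint URLs, client ID, and scopes below are placeholders, not this repo's actual configuration (real values come from the OpenEMR server's `.well-known/smart-configuration` document).

```python
from urllib.parse import urlencode

def build_smart_authorize_url(
    authorize_endpoint: str,
    client_id: str,
    redirect_uri: str,
    launch_token: str,
    state: str,
) -> str:
    """Build a SMART on FHIR (EHR launch) authorization request URL.

    All concrete values passed in below are hypothetical examples.
    """
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "launch": launch_token,  # opaque token the EHR passes at launch
        "scope": "launch openid fhirUser patient/*.read",
        "state": state,          # CSRF protection; verify on the callback
        "aud": "https://openemr.example/apis/default/fhir",  # FHIR base URL
    }
    return f"{authorize_endpoint}?{urlencode(params)}"

url = build_smart_authorize_url(
    "https://openemr.example/oauth2/default/authorize",
    "my-client-id",
    "https://app.example/callback",
    "launch-abc123",
    "xyz-state",
)
```

After the user authorizes, the EHR redirects back with an authorization code that the app exchanges for an access token carrying the patient context.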
Key flow:
- Clinician launches app from OpenEMR patient dashboard
- SMART on FHIR OAuth2 authorization with patient context
- Browser captures ambient audio during clinical encounter
- Audio transcribed to text via ASR service
- Transcript + FHIR data (vitals, labs, conditions, meds) sent to summarization
- RAG-enhanced model generates SOAP note using disease-specific schemas
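The final step, assembling the transcript and FHIR context into model input, can be sketched roughly as below. Field names and the prompt layout are illustrative assumptions, not the repo's actual schema.

```python
from dataclasses import dataclass

@dataclass
class EncounterContext:
    """Illustrative container for the data sent to summarization."""
    transcript: str       # ASR output from the ambient recording
    vitals: dict          # e.g. parsed from FHIR Observation resources
    conditions: list      # from FHIR Condition resources
    medications: list     # from FHIR MedicationRequest resources

def build_summarization_prompt(ctx: EncounterContext, schema: str) -> str:
    """Combine a disease-specific SOAP schema, FHIR context, and the
    encounter transcript into a single model prompt."""
    return "\n".join([
        "Generate a SOAP note following this schema:",
        schema,
        f"Vitals: {ctx.vitals}",
        f"Active conditions: {', '.join(ctx.conditions)}",
        f"Medications: {', '.join(ctx.medications)}",
        "Encounter transcript:",
        ctx.transcript,
    ])

ctx = EncounterContext(
    transcript="Patient reports improved glucose readings this month.",
    vitals={"bp": "120/80", "hr": 72},
    conditions=["Type 2 diabetes"],
    medications=["metformin 500 mg BID"],
)
prompt = build_summarization_prompt(ctx, "S:\nO:\nA:\nP:")
```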
```
smart-ambient-listening/
├── frontend/                   # SMART on FHIR app - launches from OpenEMR, handles OAuth2, captures audio
├── transcription-service/      # Speech-to-text API gateway, calls Modal GPU for ASR inference
└── rag-text-summarization/     # SOAP note generation - retrieves relevant schemas from vector DB, generates clinical summary
```
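The schema-retrieval step inside `rag-text-summarization/` amounts to nearest-neighbor search over schema embeddings. A self-contained sketch using plain cosine similarity is shown below; a real deployment would query the actual vector DB with embeddings from the model listed in the table below, both still TBD.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve_schemas(query_vec, schema_index, top_k=1):
    """Return the names of the top_k schemas most similar to the query.

    schema_index: list of (schema_name, embedding) pairs, standing in
    for the vector DB lookup.
    """
    ranked = sorted(
        schema_index,
        key=lambda item: cosine(query_vec, item[1]),
        reverse=True,
    )
    return [name for name, _ in ranked[:top_k]]
```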
| Service | Model | Status |
|---|---|---|
| ASR (Speech-to-Text) | [TBD] | Placeholder |
| Summarization | [TBD] | Placeholder |
| RAG Embeddings | [TBD] | Placeholder |
See smart-ambient-listening/README.md for details.

Licensed under MPL-2.0.