The Lung Auscultation Assessment System (LAAS) is a project I originally developed in 2022.
It combines augmented reality guidance with machine learning models to help users perform lung sound auscultation more accurately and consistently.
- AR Guidance: Step-by-step overlay to position the stethoscope correctly.
- Machine Learning Integration: CNN-based model to analyze lung sounds (see the sketch after this list)
- Accessibility Focus: Low-cost, user-friendly design for preventive healthcare.
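To make the ML integration concrete, here is a minimal, illustrative PyTorch sketch of a CNN-style lung sound classifier. The layer sizes, mel-spectrogram input shape, and class labels are assumptions for illustration only; the actual LAAS model may differ.

```python
# Illustrative only: a small PyTorch CNN of the kind described above.
# Input shape, layer sizes, and class labels are assumptions, not the real LAAS model.
import torch
import torch.nn as nn

class LungSoundCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Two small conv blocks over a 1-channel mel-spectrogram (e.g. 1 x 64 x 128)
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),     # collapse the time/frequency dimensions
            nn.Flatten(),
            nn.Linear(32, num_classes), # e.g. normal / wheeze / crackle / both
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: classify one spectrogram clip
model = LungSoundCNN()
dummy_spectrogram = torch.randn(1, 1, 64, 128)  # (batch, channel, mel bands, frames)
logits = model(dummy_spectrogram)
predicted_class = logits.argmax(dim=1)
```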
- Frontend: Swift / SwiftUI (AR integration)
- ML Model: PyTorch
- AR: Apple ARKit overlays
- Backend/Deployment: AWS (prototype for model serving)
- macOS with Xcode 14+
- iOS device (iPhone/iPad) with iOS 15+
- Python 3.9+
- PyTorch installed
- AWS CLI (optional, for model serving)
# Clone the repo
git clone https://github.com/YOUR_USERNAME/LAAS.git
cd LAAS
- Navigate to `/ios-app/`
- Open `LAAS.xcodeproj` or `LAAS.xcworkspace` in Xcode
- Run on a physical iOS device (ARKit not supported in simulator)
cd ml-model
pip install -r requirements.txt
python predict.py --input sample_sound.wav
- Upload model to S3
- Deploy with Flask/FastAPI (see the serving sketch below)
- Update app config with API endpoint
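The Flask/FastAPI deployment step could look roughly like this sketch, which loads a TorchScript export of the model and exposes a single prediction endpoint. The `/predict` route, the `model.pt` filename, the 16 kHz sample rate, and the torchaudio preprocessing are assumptions, not the repository's actual serving code.

```python
# Illustrative only: one possible FastAPI serving setup for the trained model.
# Route name, model path, and preprocessing are assumptions, not the repo's code.
import io

import torch
import torchaudio
from fastapi import FastAPI, UploadFile

app = FastAPI()

# Hypothetical TorchScript export of the trained model (e.g. downloaded from S3).
model = torch.jit.load("model.pt")
model.eval()

# Assumed preprocessing: 16 kHz audio converted to a 64-band mel-spectrogram.
mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)

@app.post("/predict")
async def predict(file: UploadFile):
    # Decode the uploaded .wav and resample if needed.
    waveform, sample_rate = torchaudio.load(io.BytesIO(await file.read()))
    if sample_rate != 16000:
        waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)
    # Shape becomes (1, channels, n_mels, frames) to match the model input.
    spectrogram = mel(waveform).unsqueeze(0)
    with torch.no_grad():
        logits = model(spectrogram)
    return {"predicted_class": int(logits.argmax(dim=1))}
```

A server like this can be run locally with `uvicorn serve:app --reload` (assuming the file is saved as `serve.py`); the app config's API endpoint would then point at the deployed URL.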
- Retrain the ML model with larger, more diverse datasets (see the training sketch after this list)
- Enhance AR overlay precision
- Deploy as a testable iOS app prototype
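As a rough illustration of the retraining step, the sketch below shows a bare-bones PyTorch training loop. The dataset class, hyperparameters, and optimizer choice are placeholders rather than the project's actual training setup.

```python
# Illustrative only: a minimal retraining loop for the CNN sketched earlier.
# LungSoundDataset and the hyperparameters are placeholders.
import torch
from torch.utils.data import DataLoader, Dataset

class LungSoundDataset(Dataset):
    """Hypothetical dataset yielding (spectrogram, label) pairs."""
    def __init__(self, spectrograms, labels):
        self.spectrograms = spectrograms
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.spectrograms[idx], self.labels[idx]

def retrain(model, dataset, epochs=10, lr=1e-3):
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for spectrograms, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(spectrograms), labels)
            loss.backward()
            optimizer.step()
    return model
```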
Created by Sophie Lin

