A context-aware AI digital health assistant that helps college students fight doomscrolling through intelligent, graded interventions.
College students lose hours daily to addictive apps. Existing screen time tools use rigid, one-size-fits-all limits that get ignored. They don't understand context — scrolling TikTok for 20 minutes on a Saturday morning is fine; doing it at 2 AM before an exam is not.
Dopamine AI is a context-aware digital health guardian. It considers what you're doing, when you're doing it, and what's coming up — then decides whether to leave you alone, nudge you, or block the app entirely.
📡 Sense → 🧠 Think → ⚡ Act

- **Sense** — collect context: current app, time, calendar, deadlines, session duration
- **Think** — AI risk assessment: the LLM evaluates a level from 0 to 3 using the decision matrix
- **Act** — graded intervention: notification, popup, or app block
| Level | Action | When |
|---|---|---|
| 0 | Pass | Using Google Docs during the day — keep going. |
| 1 | Gentle Notification | 30 min on YouTube, no urgent deadlines. |
| 2 | Vibrate + Popup | Scrolling Instagram 15 min before lecture. |
| 3 | App Block | 2 AM, exam in 6 hours, 90 min on TikTok. |
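The decision matrix above could be modeled in the backend as a small enum; the names and mapping here are an illustrative sketch, not the actual types in `think.py`:

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    """Graded intervention levels from the decision matrix (illustrative names)."""
    PASS = 0    # keep going
    NUDGE = 1   # gentle notification
    POPUP = 2   # vibrate + popup
    BLOCK = 3   # app block

# Action each level maps to in the Act layer (hypothetical action keys)
ACTIONS = {
    RiskLevel.PASS: "none",
    RiskLevel.NUDGE: "notification",
    RiskLevel.POPUP: "vibrate_popup",
    RiskLevel.BLOCK: "app_block",
}

print(ACTIONS[RiskLevel(3)])  # → app_block
```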
Users sign in with Google, which also grants access to their Google Calendar for context-aware scheduling.
Once set up, Dopamine AI runs in the background. When the user opens a distracting app, the Sense layer collects context:
- Current app and how long they've been on it
- Next calendar event and countdown
- Upcoming deadlines from Canvas
- Time of day — weekday night vs. weekend afternoon
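The context bundle the Sense layer hands to the Think layer could be sketched as a dataclass; the field names here are assumptions, not necessarily what `sense.py` uses:

```python
from dataclasses import dataclass, asdict

@dataclass
class Context:
    """Snapshot the Sense layer collects (illustrative fields)."""
    app: str                 # current foreground app
    continuous_minutes: int  # unbroken time spent in that app
    next_event: str          # next Google Calendar event title
    minutes_to_event: int    # countdown to that event
    deadline: str            # nearest upcoming Canvas deadline
    hour: int                # local hour of day, 0-23
    is_weekend: bool

# Example: the Level 3 scenario from the decision matrix
ctx = Context("TikTok", 90, "CS midterm", 360, "CS midterm exam", 2, False)
print(asdict(ctx)["app"])  # → TikTok
```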
Based on context, the AI decides the right level of intervention:
Level 1 — Gentle Nudge: a quiet notification, e.g. after 30 minutes on YouTube with no urgent deadlines.
Level 2 — Popup Warning: a vibration plus an on-screen popup, e.g. scrolling Instagram 15 minutes before lecture.
Level 3 — App Blocked: the app is blocked outright, e.g. 90 minutes on TikTok at 2 AM with an exam in 6 hours.
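A minimal Act-layer dispatch could look like the sketch below; the payload shape is an assumption, and the real `act.py` would hand this to Firebase Cloud Messaging rather than return it:

```python
def act(level: int, message: str) -> dict:
    """Map a risk level to a graded intervention payload (sketch, not act.py itself)."""
    payload = {"message": message}
    if level == 0:
        payload["action"] = "none"            # leave the user alone
    elif level == 1:
        payload["action"] = "notification"    # silent banner
    elif level == 2:
        payload["action"] = "vibrate_popup"   # interrupts scrolling
    else:
        payload["action"] = "app_block"       # hard stop
    return payload  # sent as a push via FCM in the real pipeline

print(act(2, "Lecture in 15 minutes — wrap it up."))
```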
We built a Streamlit-based debug console to demonstrate and test the full pipeline without needing a phone.
```shell
conda activate aiHacks
streamlit run debug_ui.py
```

The main demo tab: use sliders and inputs to simulate any scenario, then watch the AI evaluate it in real time.
Step 1 — Set the context (Sense):
Use the left panel to configure:
- Which app the user is on (TikTok, Valorant, Google Docs, etc.)
- How long they've been using it
- Next calendar event and deadline
- Current time and whether it's a weekend
Or click a preset scenario button to load a typical situation instantly.
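The preset buttons could be backed by a simple dict of scenarios; the names and values below are made up to match the examples elsewhere in this README:

```python
# Hypothetical presets the debug console loads with one click
PRESETS = {
    "Saturday morning chill": {"app": "TikTok", "continuous_minutes": 20, "hour": 10, "is_weekend": True},
    "Pre-lecture scroll": {"app": "Instagram", "continuous_minutes": 15, "minutes_to_event": 15, "hour": 13, "is_weekend": False},
    "2 AM doom spiral": {"app": "TikTok", "continuous_minutes": 90, "hour": 2, "is_weekend": False},
}

def load_preset(name: str) -> dict:
    """Return a copy so slider edits don't mutate the preset itself."""
    return dict(PRESETS[name])

print(load_preset("2 AM doom spiral")["hour"])  # → 2
```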
Step 2 — Run the pipeline:
Click "Run Pipeline". The AI evaluates the context and returns:
- Risk Level (0-3)
- Message to the user (in the tough-love style of Tony, the AI persona)
- Reasoning for why it chose that level
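The three fields above could come back from the LLM as JSON; a defensive parse might look like this (the schema and clamping are assumptions, not the actual `think.py` code):

```python
import json

def parse_think_response(raw: str) -> dict:
    """Parse the LLM's JSON verdict, clamping the level to the 0-3 range."""
    data = json.loads(raw)
    level = max(0, min(3, int(data.get("risk_level", 0))))
    return {
        "risk_level": level,
        "message": data.get("message", ""),
        "reasoning": data.get("reasoning", ""),
    }

raw = '{"risk_level": 3, "message": "Exam in 6 hours. Close TikTok.", "reasoning": "Late night + deadline + 90 min scroll."}'
print(parse_think_response(raw)["risk_level"])  # → 3
```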
Step 3 — Compare scenarios:
Change the parameters and run again. For example:
- Move "Continuous Minutes" from 20 → 90: watch the level go up
- Switch app from TikTok → Google Docs: watch the level drop
- Set hour to 2 AM + close deadline: level jumps to 3
Chat directly with Tony (the AI persona) to test conversational responses.
Edit the system prompt that drives the Think layer, and A/B test two different prompts on the same scenario side-by-side.
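The side-by-side comparison could be a small harness like this; `evaluate` stands in for the real LLM call in `think.py`, and the toy scoring rule is purely illustrative:

```python
# Hypothetical A/B harness: run two system prompts against the same scenario
def ab_test(evaluate, scenario: dict, prompt_a: str, prompt_b: str) -> dict:
    return {"prompt_A": evaluate(prompt_a, scenario),
            "prompt_B": evaluate(prompt_b, scenario)}

# Toy evaluate: a stricter prompt yields a higher level (illustration only)
def toy_evaluate(prompt: str, scenario: dict) -> int:
    base = 2 if scenario["continuous_minutes"] > 60 else 1
    return min(3, base + ("strict" in prompt))

result = ab_test(toy_evaluate, {"continuous_minutes": 90}, "be gentle", "be strict")
print(result)  # → {'prompt_A': 2, 'prompt_B': 3}
```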
┌─────────────────────────────────────┐
│ Flutter App (iOS/Android) │
│ ┌──────────┐ ┌─────────────────┐ │
│ │ Google │ │ Screen Time │ │
│ │ Sign-In │ │ Monitoring │ │
│ └────┬─────┘ └───────┬─────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────────────────┐ │
│ │ Firebase Cloud Messaging │◄───┤── receives interventions
│ └─────────────────────────────┘ │
└──────────────┬──────────────────────┘
│ HTTPS (Cloud Function call)
▼
┌──────────────────────────────────────┐
│ Firebase Cloud Functions (Python) │
│ │
│ sense.py → think.py → act.py │
│ (context) (LLM eval) (push) │
│ │
│ wingman.py (pipeline orchestrator) │
└──────────────────────────────────────┘
│
▼
┌──────────────────────────────────────┐
│ Oracle GenAI (LLM Inference) │
│ Llama 3.3-70b / Cohere Command R+ │
└──────────────────────────────────────┘
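The sense → think → act flow in the diagram can be sketched end-to-end. The function names mirror the module names, but the bodies are stand-ins: the real `think.py` calls Oracle GenAI rather than the toy rule used here, and the real `act.py` pushes via Firebase Cloud Messaging:

```python
def sense() -> dict:
    """Stand-in for sense.py: gather context (mocked here)."""
    return {"app": "TikTok", "continuous_minutes": 90, "hour": 2, "deadline_hours": 6}

def think(ctx: dict) -> dict:
    """Stand-in for think.py: the real version asks the LLM; this uses a toy rule."""
    level = 3 if ctx["hour"] < 5 and ctx["continuous_minutes"] >= 60 else 1
    return {"risk_level": level, "message": "Exam soon. Put the phone down."}

def act(verdict: dict) -> str:
    """Stand-in for act.py: the real version sends a push notification."""
    return {0: "none", 1: "notification", 2: "vibrate_popup", 3: "app_block"}[verdict["risk_level"]]

def run_pipeline() -> str:
    """Mirrors wingman.py's orchestration: sense → think → act."""
    return act(think(sense()))

print(run_pipeline())  # → app_block
```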
| Layer | Technology |
|---|---|
| Frontend | Flutter (iOS/Android) |
| Backend | Firebase Cloud Functions (Python) |
| AI Model | Oracle GenAI (Llama 3.3-70b, Cohere Command R+) |
| Push Notifications | Firebase Cloud Messaging |
| Auth & Calendar | Google Sign-In + Calendar API |
| Debug UI | Streamlit |
aiHacks/
├── .env # API keys (Oracle GenAI)
├── wingman.py # Cloud Functions entry — orchestrates the pipeline
├── sense.py # Sense layer — context gathering & mock data
├── think.py # Think layer — LLM risk evaluation with decision matrix
├── act.py # Act layer — graded intervention execution
└── debug_ui.py # Streamlit debug console for demo & testing