Beaker

Voice-Guided Lab Protocol Execution with On-Device AI

2nd Place Winner, Meta ExecuTorch Hackathon

Beaker is a React Native mobile app that helps scientists execute laboratory protocols hands-free using voice commands and on-device AI. All inference runs locally on the device via ExecuTorch — no cloud, no network latency, full privacy.

Demo

Watch the full demo on Loom

How It Works

  1. Select a protocol — Choose from built-in protocols (PCR, ELISA, Plasmid Miniprep) or load your own
  2. Start execution — The AI reads out the current step via text-to-speech
  3. Work hands-free — Say "next step" or "go back" to navigate, or ask questions about the current step (the intent routing is sketched below)
  4. Get instant answers — The on-device LLM answers protocol-related questions in real time
  5. Track your runs — Notes, step completion, and run history are all persisted locally
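
Steps 3 and 4 rest on a simple routing decision over the Whisper transcript: recognized navigation phrases move the protocol pointer, and everything else goes to the LLM as a question. Below is a minimal TypeScript sketch of that routing; the names and regexes are illustrative, not the app's actual code.

// Illustrative intent routing for the voice loop (names are assumptions).
type Intent =
  | { kind: 'navigate'; direction: 'next' | 'back' }
  | { kind: 'question'; text: string };

const NEXT = /\b(next step|continue|proceed)\b/i;
const BACK = /\b(previous step|go back)\b/i;

function detectIntent(transcript: string): Intent {
  if (NEXT.test(transcript)) return { kind: 'navigate', direction: 'next' };
  if (BACK.test(transcript)) return { kind: 'navigate', direction: 'back' };
  // Anything else is treated as a question about the current step.
  return { kind: 'question', text: transcript };
}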

Key Features

  • Fully on-device AI — LLM inference (Llama 3.2 1B) and speech-to-text (Whisper Tiny) run entirely on the device via ExecuTorch. No internet required during protocol execution.
  • Voice-first interaction — Hands-free operation through speech recognition and text-to-speech. Critical when you're wearing gloves or handling reagents.
  • Protocol management — Structured protocol format with steps, substeps, estimated durations, and versioning (see the interface sketch after this list). Comes with real lab protocols out of the box.
  • Run history & notes — Every protocol run is logged with timestamps, step completion status, and per-step notes.
  • Tool calling — The LLM can execute device actions (calendar events, brightness control) through a structured tool-calling interface.
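
The structured protocol format called out above maps naturally onto a small set of TypeScript interfaces. The shape below is a sketch inferred from the feature list (steps, substeps, estimated durations, versioning); the repository's actual definitions live in src/types and may differ.

// Hypothetical protocol shape; field names are assumptions.
interface ProtocolStep {
  title: string;
  instructions: string;
  estimatedMinutes?: number; // feeds the per-protocol duration estimate
  substeps?: ProtocolStep[];
}

interface Protocol {
  id: string;
  name: string;
  version: string; // protocols are versioned
  steps: ProtocolStep[];
}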

Architecture

┌─────────────────────────────────────────────────────┐
│                  React Native App                   │
│                 (Expo + expo-router)                │
├─────────────────────────────────────────────────────┤
│  Voice Loop                                         │
│  ┌──────────┐   ┌──────────┐   ┌────────────────┐   │
│  │ Record   │──▶│ Whisper  │──▶│ Intent         │   │
│  │ Audio    │   │ STT      │   │ Detection      │   │
│  └──────────┘   └──────────┘   └───────┬────────┘   │
│                                   ┌────┴────┐       │
│                              Navigation    Q&A      │
│                             (next/back)     │       │
│                                  │       ┌──▼──┐    │
│                                  │       │ LLM │    │
│                                  │       └──┬──┘    │
│                                  ▼          ▼       │
│                             ┌──────────────────┐    │
│                             │   TTS Response   │    │
│                             └──────────────────┘    │
├─────────────────────────────────────────────────────┤
│  State (Zustand + AsyncStorage)                     │
│  Protocols │ Runs │ Notes │ Voice Chat History      │
├─────────────────────────────────────────────────────┤
│  react-native-executorch                            │
│  Llama 3.2 1B SpinQuant  │  Whisper Tiny EN         │
└─────────────────────────────────────────────────────┘
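
Every store in the state layer is persisted through AsyncStorage. Here is a minimal sketch of one such store using Zustand's persist middleware; the store shape and names are illustrative, and the app's real stores live in src/state/.

// Illustrative persisted store, not the app's actual code.
import AsyncStorage from '@react-native-async-storage/async-storage';
import { create } from 'zustand';
import { createJSONStorage, persist } from 'zustand/middleware';

interface NotesState {
  notes: Record<string, string>; // assumed keying, e.g. "runId:stepIndex"
  addNote: (key: string, text: string) => void;
}

export const useNotesStore = create<NotesState>()(
  persist(
    (set) => ({
      notes: {},
      addNote: (key, text) =>
        set((state) => ({ notes: { ...state.notes, [key]: text } })),
    }),
    { name: 'beaker-notes', storage: createJSONStorage(() => AsyncStorage) },
  ),
);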

Models

Model                    Purpose                  Runtime
Llama 3.2 1B SpinQuant   Chat, Q&A, tool calling  ExecuTorch (.pte)
Whisper Tiny EN          Speech-to-text           ExecuTorch (.pte)
OS TTS                   Text-to-speech           expo-speech (native)
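
Text-to-speech is the one piece that needs no model download: expo-speech wraps the OS speech engine, so reading a step aloud is a one-liner. The LLM and Whisper models, by contrast, are loaded as .pte files through react-native-executorch hooks, whose exact signatures vary by library version; check its docs for your setup.

// Reading a step aloud via the native OS TTS engine (expo-speech).
import * as Speech from 'expo-speech';

function readStepAloud(instructions: string) {
  Speech.speak(instructions, { language: 'en-US' });
}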

Built-in Protocols

Protocol                        Steps   Est. Duration
Basic PCR Amplification         3       ~135 min
ELISA Assay                     2       ~90 min
Plasmid Miniprep (NEB #T1110)   10      ~29 min

Tech Stack

  • React Native 0.79 with New Architecture enabled
  • Expo 53 + expo-router for file-based routing
  • react-native-executorch for on-device LLM and STT inference
  • react-native-audio-api for audio recording
  • Zustand for state management with AsyncStorage persistence
  • TypeScript throughout

Getting Started

Prerequisites

  • Node.js 18+
  • Yarn
  • Xcode 15+ (iOS)
  • A physical iOS device (ARM64 — simulator not supported for ExecuTorch models)
  • iOS 15.1+

Installation

# Clone the repository
git clone https://github.com/sskarz/beaker_executorch.git
cd beaker_executorch

# Install dependencies
yarn install

# Install iOS native dependencies
cd ios && pod install && cd ..

Running

# Start the dev server
yarn start

# Build and run on a physical iOS device
yarn ios

Note: The first launch will download the LLM and Whisper model files (~1-2 GB). A progress indicator is shown during download. Subsequent launches use the cached models.

Development Commands

yarn start        # Start Expo dev server
yarn ios          # Build & run on iOS
yarn android      # Build & run on Android
yarn typecheck    # Run TypeScript type checking
yarn lint         # Run ESLint with auto-fix

Project Structure

app/
├── _layout.tsx                    # Root drawer navigation
├── index.tsx                      # Home screen
├── (protocols)/
│   ├── select.tsx                 # Protocol selection
│   ├── executing/[protocolId].tsx # Protocol execution with voice
│   └── log.tsx                    # Run history
├── voice_chat/                    # Voice chat demo
├── llm/                           # Basic LLM chat demo
├── llm_tool_calling/              # Tool calling demo
└── llm_structured_output/         # Structured output demo

src/
├── components/
│   ├── VoiceControl.tsx           # Mic button & recording UI
│   └── VoiceChatHistory.tsx       # Chat history display
├── hooks/
│   └── useVoiceChat.ts            # Voice chat state & logic
├── state/
│   ├── protocols.ts               # Protocol store (Zustand)
│   ├── runs.ts                    # Run tracking store
│   ├── notes.ts                   # Notes store
│   └── storage.ts                 # AsyncStorage adapter
├── types/
│   └── index.ts                   # TypeScript interfaces
└── utils/
    └── llmContext.ts              # LLM prompt utilities

utils/
├── tools.ts                       # Tool calling (calendar, brightness)
└── protocolTools.ts               # Protocol navigation tools

data/protocols/                    # Sample protocol JSON files
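
The tool-calling interface in utils/tools.ts is what lets the LLM trigger device actions such as creating calendar events or adjusting screen brightness. Below is a hedged sketch of how such a structured interface can be wired up; the names, shapes, and stub handlers are assumptions, not the repository's actual code.

// Illustrative tool-call dispatch; names and shapes are assumptions.
interface ToolCall {
  name: string;
  args: Record<string, unknown>;
}

type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

const tools: Record<string, ToolHandler> = {
  // The real handlers would call device APIs; these are stubs.
  create_calendar_event: async (args) => `Created event "${String(args.title)}"`,
  set_brightness: async (args) => `Brightness set to ${String(args.level)}`,
};

// The LLM emits a JSON tool call; parse it and dispatch to a handler.
async function dispatchToolCall(raw: string): Promise<string> {
  const call = JSON.parse(raw) as ToolCall;
  const handler = tools[call.name];
  if (!handler) return `Unknown tool: ${call.name}`;
  return handler(call.args);
}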

Voice Commands

During protocol execution, you can say:

Command                               Action
"next step", "continue", "proceed"    Advance to the next step
"previous step", "go back"            Return to the previous step
Any question                          Ask the LLM about the current step

Team

Built by sskarz at Meta's ExecuTorch Hackathon.

License

This project is part of the react-native-executorch ecosystem.
