Ask a question. Watch the web teach you.
ThinkOva is a visual learning tool that takes any topic you type, opens a real browser, navigates the web automatically, and narrates what it's doing — step by step, out loud.
No links. No walls of text. Just watch and listen.
- You type a topic and hit search
- An AI agent (Nova LLM on AWS Bedrock) breaks it into a step-by-step plan
- Nova Act executes each step in a real headless browser
- Live screenshots stream to your screen at 10fps
- ElevenLabs reads the narration out loud for each step
- The next step only starts after the audio finishes — so you never fall behind
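The step-gating described above can be sketched roughly as an async loop. This is a minimal sketch with hypothetical helper names (`execute_in_browser`, `narrate`); the real orchestration lives in the FastAPI backend and drives Nova Act and ElevenLabs instead of these placeholders:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Step:
    instruction: str   # what the browser agent should do
    narration: str     # what the TTS voice reads out loud

async def execute_in_browser(instruction: str) -> None:
    await asyncio.sleep(0)  # placeholder for Nova Act driving the browser

async def narrate(text: str) -> None:
    await asyncio.sleep(0)  # placeholder for ElevenLabs TTS playback

async def run_session(steps: list[Step]) -> list[str]:
    """Execute steps in order, gating each one on the narration finishing."""
    completed = []
    for step in steps:
        # Browser action and narration run together for this step...
        browser_task = asyncio.create_task(execute_in_browser(step.instruction))
        await narrate(step.narration)   # ...but we block until the audio ends
        await browser_task
        # Only now does the loop advance, so the viewer never falls behind.
        completed.append(step.instruction)
    return completed
```

The key design point is that advancing to the next iteration is conditioned on `narrate` returning, which is what keeps the visuals and the voice in lockstep.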
| What | Why |
|---|---|
| Next.js | Frontend — search bar, live canvas, WebSocket |
| FastAPI + Redis | Backend — sessions, orchestration |
| AWS Bedrock (Nova LLM) | Generates the step plan |
| Nova Act | Drives the browser, streams screenshots |
| ElevenLabs TTS | Narrates each step out loud |
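On the planning side, the Nova LLM's response has to be turned into an ordered list of steps before Nova Act can execute them. A sketch of that parsing, assuming a JSON schema of `{"steps": [{"action": ..., "narration": ...}]}` (the app's actual schema may differ):

```python
import json

def parse_plan(raw: str) -> list[dict]:
    """Parse the LLM's JSON plan into an ordered list of step dicts.

    Schema is an assumption: {"steps": [{"action": ..., "narration": ...}]}.
    Malformed entries are dropped rather than failing the whole session.
    """
    plan = json.loads(raw)
    steps = plan.get("steps", [])
    return [s for s in steps if "action" in s and "narration" in s]
```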
```bash
cd apps/backend
cp .env.example .env   # add your keys
docker-compose up
```

```bash
cd apps/frontend
npm install
```

Create `apps/frontend/.env.local`:

```
NEXT_PUBLIC_ELEVENLABS_API_KEY=sk_yourkey
NEXT_PUBLIC_VOICE_ID=JBFqnCBsd6RMkjVDRZzb
NEXT_PUBLIC_WS_URL=ws://localhost:8080
```

Then start the dev server:

```bash
npm run dev
```

Backend (`apps/backend/.env`):

```
NOVA_ACT_API_KEY=
AGENT_RUNTIME_ARN=
AWS_REGION=
REDIS_URL=
LOCAL_DEV=1
```

Frontend (`apps/frontend/.env.local`):

```
NEXT_PUBLIC_ELEVENLABS_API_KEY=
NEXT_PUBLIC_VOICE_ID=
NEXT_PUBLIC_WS_URL=
```

Never commit either of these files. Both are gitignored.
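A minimal sketch of how the backend might read the variables listed above (the variable names come from the list; the loader itself, and the fallback defaults for `AWS_REGION` and `REDIS_URL`, are assumptions):

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    nova_act_api_key: str
    agent_runtime_arn: str
    aws_region: str
    redis_url: str
    local_dev: bool

def load_settings() -> Settings:
    """Read backend configuration from the environment.

    The .env file is loaded by docker-compose (or your process manager),
    not by this sketch; here we only read the resulting environment.
    """
    return Settings(
        nova_act_api_key=os.environ.get("NOVA_ACT_API_KEY", ""),
        agent_runtime_arn=os.environ.get("AGENT_RUNTIME_ARN", ""),
        aws_region=os.environ.get("AWS_REGION", "us-east-1"),      # assumed default
        redis_url=os.environ.get("REDIS_URL", "redis://localhost:6379"),  # assumed default
        local_dev=os.environ.get("LOCAL_DEV", "0") == "1",
    )
```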
```
visualizer/
├── apps/
│   ├── agent/     # Nova LLM agent + prompts
│   ├── backend/   # FastAPI + Redis + Nova runner
│   └── frontend/  # Next.js app + ElevenLabs TTS
```
Nova Act · AWS Bedrock · ElevenLabs · Next.js · FastAPI

