Enhance the Wolt user experience with natural-language voice interaction.
This project integrates an AI-powered voice assistant into the Wolt app (or Wolt-like mock UI), enabling users to search restaurants, order food, navigate the app, and interact with content using voice commands.
Demo version available here -> https://junction-challenge.vercel.app/
Presentation available here -> https://tinyurl.com/Yuho-Ai-junction
Core features:

- Record user speech directly in the app (in the browser for the demo version)
- Transcribe audio using AI (e.g., Whisper, Deepgram, or built-in STT)
- Process natural-language intent with an LLM
- Respond using on-screen text and optional text-to-speech
- Works on top of a simulated Wolt-style interface
- Supports search flows such as:
- “Find sushi places nearby”
- “Show me vegetarian burgers”
- “Open the cart”
- “Track my order”
Core modules (wired together in the sketch after this list):

- Voice recorder
- Transcription handler
- LLM request module
- UI command interpreter
- Response renderer
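
The modules above form a single pipeline. A rough sketch of how they could hand off to each other, with all interface and function names hypothetical rather than the project's real API:

```ts
// Hypothetical glue code showing the five module hand-offs; the real
// implementations live under src/components/.
type Intent = { action: "navigate" | "search" | "filter" | "respond"; payload: string };
type UiAction = { kind: string; target: string };

interface VoiceRecorder { record(): Promise<Blob>; }
interface Transcriber { transcribe(audio: Blob): Promise<string>; }
interface IntentParser { parse(text: string): Promise<Intent>; }
interface UiInterpreter { toAction(intent: Intent): UiAction; }
interface Renderer { render(action: UiAction): void; }

async function handleVoiceCommand(deps: {
  recorder: VoiceRecorder;
  stt: Transcriber;
  llm: IntentParser;
  interpreter: UiInterpreter;
  renderer: Renderer;
}): Promise<void> {
  const audio = await deps.recorder.record();        // 1. Voice recorder
  const text = await deps.stt.transcribe(audio);     // 2. Transcription handler
  const intent = await deps.llm.parse(text);         // 3. LLM request module
  const action = deps.interpreter.toAction(intent);  // 4. UI command interpreter
  deps.renderer.render(action);                      // 5. Response renderer
}
```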
Tech stack:

- Vite + React + TypeScript
- Web Audio API for recording
- OpenAI / custom LLM API for reasoning
- Tailwind or CSS Modules for styling (depending on the project setup)
Project structure:

```
ai-assistant-wolt/
│
├── src/
│   ├── App.tsx           # Main UI + assistant logic
│   ├── main.tsx          # React entry point
│   ├── components/       # UI + assistant components
│   ├── assets/           # Icons, audio waves, images
│   ├── styles/           # Global stylesheets
│   ├── guidelines/       # AI behaviour + prompt guidelines
│   └── Attributions.md   # Credits for assets
│
├── index.html
├── vite.config.ts
├── package.json
└── README.md             # (This file)
```
To run locally:

```
git clone https://github.com/ristoxxx/JunctionChallenge.git
cd <repo-name>
npm install
```

Create a file named .env in the project root:

```
VITE_OPENAI_API_KEY=your_api_key_here
```

(Optional, depending on your LLM provider.)
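
Vite exposes variables prefixed with VITE_ on `import.meta.env` at build time; a minimal sketch of reading the key (the warning message is illustrative):

```ts
/// <reference types="vite/client" />
// Read the key that Vite injects from .env at build time.
export const OPENAI_API_KEY: string = import.meta.env.VITE_OPENAI_API_KEY ?? "";

if (!OPENAI_API_KEY) {
  console.warn("VITE_OPENAI_API_KEY is not set; LLM requests will fail.");
}
```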
Start the development server:

```
npm run dev
```

The assistant listens using the Web Audio API.
Audio is sent to an STT model for accurate transcription.
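
One way to capture microphone audio in the browser and hand it to an STT model is the MediaRecorder API; a sketch under the assumption that OpenAI's hosted Whisper endpoint is the backend (in production the request should go through a server so the API key is not exposed to the browser):

```ts
// Record a fixed window of microphone audio, then POST it to OpenAI's
// /v1/audio/transcriptions endpoint. The helper name is hypothetical.
async function recordAndTranscribe(ms: number, apiKey: string): Promise<string> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks: Blob[] = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);

  const stopped = new Promise<void>((resolve) => { recorder.onstop = () => resolve(); });
  recorder.start();
  await new Promise((r) => setTimeout(r, ms)); // record for `ms` milliseconds
  recorder.stop();
  await stopped;
  stream.getTracks().forEach((t) => t.stop()); // release the microphone

  const form = new FormData();
  form.append("file", new Blob(chunks, { type: "audio/webm" }), "speech.webm");
  form.append("model", "whisper-1");

  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
    body: form,
  });
  const data = await res.json();
  return data.text; // the endpoint returns { text: "..." }
}
```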
The assistant receives the user's words, interprets the intent, and turns it into Wolt UI actions.
Commands are mapped to UI actions (sketched after the list):
- Navigate → pages
- Query → restaurant data
- Filter → categories
- Respond → voice or text
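
A minimal sketch of that mapping, assuming the LLM returns a structured intent (the Intent type, routes, and handlers are all hypothetical):

```ts
// Map a structured intent from the LLM onto the simulated Wolt UI.
type Intent =
  | { type: "navigate"; page: "home" | "cart" | "orders" }
  | { type: "query"; term: string }
  | { type: "filter"; category: string }
  | { type: "respond"; text: string };

function applyIntent(intent: Intent): void {
  switch (intent.type) {
    case "navigate": // "Open the cart", "Track my order"
      window.location.hash = `#/${intent.page}`;
      break;
    case "query": // "Find sushi places nearby"
      console.log(`searching restaurants for "${intent.term}"`);
      break;
    case "filter": // "Show me vegetarian burgers"
      console.log(`filtering by category "${intent.category}"`);
      break;
    case "respond": // plain answer, optionally spoken via text-to-speech
      speechSynthesis.speak(new SpeechSynthesisUtterance(intent.text));
      break;
  }
}

// The example dialogue below would decompose into:
applyIntent({ type: "query", term: "ramen" });
applyIntent({ type: "filter", category: "spicy" });
```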
Example dialogue:
User: “Find a spicy ramen place.”
Assistant: Searches for ramen and applies a spicy filter.
To build for production:
```
npm run build
```

Then host dist/ on:
- Vercel
- Netlify
- Cloudflare Pages
- GitHub Pages
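
GitHub Pages serves the app from a repository sub-path, so the Vite base option has to match the repo name; a sketch of vite.config.ts (the path is an assumption based on the clone URL above; Vercel, Netlify, and Cloudflare Pages serve from the root and don't need it):

```ts
// vite.config.ts — base makes built asset URLs resolve under the sub-path.
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  base: "/JunctionChallenge/", // assumption: repo name from the clone URL
});
```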
You can add Jest, Vitest, or React Testing Library for unit tests.
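
As a starting point, a Vitest unit test for a small, pure command parser might look like this (parseCommand is hypothetical, standing in for the project's UI command interpreter):

```ts
// intent.test.ts — run with `npx vitest`.
import { describe, it, expect } from "vitest";

// Hypothetical pure helper under test.
function parseCommand(text: string): { type: string; term?: string } {
  if (/cart/i.test(text)) return { type: "navigate" };
  const m = text.match(/find (.+)/i);
  return m ? { type: "query", term: m[1] } : { type: "respond" };
}

describe("parseCommand", () => {
  it("maps cart phrases to navigation", () => {
    expect(parseCommand("Open the cart")).toEqual({ type: "navigate" });
  });

  it("extracts search terms from queries", () => {
    expect(parseCommand("Find sushi places nearby")).toEqual({
      type: "query",
      term: "sushi places nearby",
    });
  });
});
```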
Pull requests are welcome. For major changes, please open an issue first to discuss the update.
MIT License. Feel free to use, modify, and distribute.
This is a code bundle for AI Assistant Integration. The original project is available at https://www.figma.com/design/QO65lMt9iaB3JTytmYHXb1/AI-Assistant-Integration.



