Preserving Indigenous languages through AI-powered interactive learning
Features • Demo • Getting Started • Tech Stack • Documentation
TurtleTalk is an innovative AI-powered platform designed to preserve and teach Indigenous languages across North America. The platform combines modern technology with cultural sensitivity to create an engaging, accessible learning experience through:
- 🎯 Interactive Duolingo-style exercises with real-time feedback
- 🤖 AI-powered pronunciation evaluation using Google Gemini
- 🔊 Natural-sounding text-to-speech with VITS neural audio generation
- 📚 Comprehensive course system with progressive difficulty levels
- 🌍 Community-driven content and cultural storytelling
- ♿ Accessibility-first design with screen reader support
Built for HackHive 2026, TurtleTalk demonstrates how technology can support Indigenous language revitalization while respecting cultural context and community needs. Currently showcasing Cree language education with plans to expand to additional Indigenous languages.
- Progressive Course System: Structured lessons from beginner to advanced levels
- Interactive Exercises: Multiple choice, translation, pronunciation, and listening comprehension
- Real-time Pronunciation Feedback: AI analyzes your pronunciation and provides specific improvement tips
- Audio-First Learning: High-quality Indigenous language audio recordings for authentic pronunciation
- Cultural Context: Stories and examples rooted in Indigenous cultures
- Gemini API Pronunciation Analysis: Advanced AI evaluates pronunciation accuracy (0-100 score)
- Personalized Feedback: Specific tips on pronunciation, intonation, and cultural nuances
- VITS Text-to-Speech: Neural network generates natural-sounding Indigenous language speech
- Smart Progress Tracking: Adaptive learning based on user performance
- Cultural Stories: Interactive storytelling in Cree with translations
- Language Exchange: Connect with native speakers and learners
- Community Forums: Discussion spaces for language and culture
- Translation Collaboration: Community-driven translation projects
- Cultural Events: Virtual and in-person Indigenous cultural events
- Screen Reader Support: Full ARIA compliance for visually impaired users
- Keyboard Navigation: Complete keyboard-only navigation support
- Adjustable Text Size: Customizable font sizes for better readability
- High Contrast Mode: Enhanced visibility options
- Audio Descriptions: Comprehensive audio feedback for all interactions
Students record themselves speaking Indigenous language phrases and receive instant AI feedback:
- Score: 0-100 pronunciation accuracy
- Feedback: Detailed analysis of strengths and areas for improvement
- Tips: Specific guidance on proper pronunciation techniques
- Cultural Notes: Context about the phrase and its cultural significance
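A minimal sketch of how the backend might validate such a response. The actual prompt and response schema are not shown in this README; `parse_evaluation` and the JSON field names below are assumptions, illustrating defensive parsing of a model reply shaped like the feedback above (score, feedback, tips, cultural notes):

```python
import json

# Hypothetical sketch: assumes the Gemini model is asked to reply with a
# JSON object like {"score": 87, "feedback": "...", "tips": [...],
# "cultural_notes": "..."}. Field names are illustrative, not TurtleTalk's
# actual schema.

def parse_evaluation(raw: str) -> dict:
    """Parse and sanity-check a pronunciation evaluation returned as JSON."""
    data = json.loads(raw)
    score = data.get("score", 0)
    if not isinstance(score, (int, float)):
        raise ValueError("score must be numeric")
    return {
        "score": max(0, min(100, int(score))),  # clamp to the 0-100 scale
        "feedback": str(data.get("feedback", "")),
        "tips": [str(t) for t in data.get("tips", [])],
        "cultural_notes": str(data.get("cultural_notes", "")),
    }

raw = ('{"score": 87, "feedback": "Good vowel length.",'
       ' "tips": ["Soften the final consonant"],'
       ' "cultural_notes": "A common greeting."}')
print(parse_evaluation(raw)["score"])  # 87
```

Clamping the score guards against the model occasionally returning values outside the advertised 0-100 range.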
Duolingo-style learning flow with:
- Multiple exercise types (listening, speaking, translation)
- Progress tracking with visual feedback
- Immediate correction and explanation
- Audio playback of correct pronunciation
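To make the exercise flow concrete, here is an illustrative sketch (not TurtleTalk's actual data model) of a translation exercise that accepts several equivalent answers with forgiving matching and returns an explanation on a miss:

```python
from dataclasses import dataclass, field

# Illustrative only -- the Exercise class below is an assumption about how
# an exercise with "immediate correction and explanation" could be modeled.

def _normalize(text: str) -> str:
    """Case- and whitespace-insensitive comparison form."""
    return " ".join(text.lower().split())

@dataclass
class Exercise:
    kind: str                       # "listening" | "speaking" | "translation"
    prompt: str
    accepted_answers: list = field(default_factory=list)
    explanation: str = ""

    def check(self, answer: str) -> tuple:
        """Return (correct, explanation); explanation is empty on success."""
        ok = _normalize(answer) in {_normalize(a) for a in self.accepted_answers}
        return ok, "" if ok else self.explanation

ex = Exercise(
    kind="translation",
    prompt="Translate: tânisi",
    accepted_answers=["hello", "hi", "how are you"],
    explanation="'tânisi' is a common Plains Cree greeting.",
)
print(ex.check("  Hello ")[0])  # True
```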
- Node.js 18+ and npm
- Python 3.9+
- SQLite (included) or PostgreSQL
- Git
1. Clone the repository

   ```bash
   git clone https://github.com/Mayalevich/TurtleTalk.git
   cd TurtleTalk
   ```

2. Frontend Setup

   ```bash
   cd frontend
   npm install
   npm start
   ```

   Frontend runs on http://localhost:3000

3. Backend Setup (new terminal)

   ```bash
   cd backend
   pip install -r requirements.txt

   # Set up environment variables
   cp .env.example .env
   # Edit .env with your Gemini API key

   # Run the server
   python run.py
   ```

   Backend runs on http://localhost:3001

4. Access the app

   Open http://localhost:3000 in your browser
Create a `.env` file in the `backend/` directory:

```env
# Database
DATABASE_URL=sqlite:///./turtletalk.db

# Google Gemini API
GEMINI_API_KEY=your_gemini_api_key_here

# JWT Secret
SECRET_KEY=your-secret-key-here
```

- React 18 with TypeScript
- Material-UI v5 - Modern component library
- Web Audio API - Audio recording and playback
- MediaRecorder API - Browser-based audio capture
- React Router - Client-side routing
- Context API - State management
- FastAPI - High-performance Python web framework
- SQLAlchemy - ORM for database operations
- SQLite/PostgreSQL - Relational database
- Google Gemini API - AI pronunciation evaluation
- JWT Authentication - Secure user sessions
- Pydantic - Data validation
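The "JWT Authentication" item above refers to signed session tokens. A production backend would use a maintained library (e.g. PyJWT) with expiry and algorithm checks; the stdlib-only sketch below is just an illustration of the HS256 mechanism, showing how the three dot-separated segments are produced and verified:

```python
import base64
import hashlib
import hmac
import json

# Educational sketch of HS256 JWTs, not the project's actual auth code.

def _b64(data: bytes) -> str:
    """base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict, secret: str) -> str:
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{_b64(sig)}"

def verify(token: str, secret: str) -> bool:
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = _b64(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sig, expected)

token = sign({"sub": "learner-42"}, "your-secret-key-here")
print(verify(token, "your-secret-key-here"))  # True
```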
- VITS - Neural text-to-speech for Indigenous languages
- Google Gemini 1.5 Flash - Multimodal AI for pronunciation analysis
- Whisper AI (Planned) - Conversational AI tutor for interactive dialogue
- Speech-to-Text (Prototype) - Voice recognition for Indigenous languages
- Audio Processing - WAV file generation and manipulation
- Speech-to-Text Model - Completed prototype for voice recognition
  - Designed for Indigenous language phonetics
  - Integration planned for future releases
  - Enables conversational practice features
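The "WAV file generation" step can be sketched with the standard library alone. In the real pipeline VITS would produce the sample array; here a sine tone stands in so the example is self-contained, and `write_wav` is an illustrative helper, not project code:

```python
import math
import struct
import wave

def write_wav(path: str, samples: list, rate: int = 22050) -> None:
    """Write mono 16-bit PCM samples (floats in [-1, 1]) to a WAV file."""
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)      # mono
        wf.setsampwidth(2)      # 16-bit samples
        wf.setframerate(rate)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        wf.writeframes(frames)

# Half a second of a 440 Hz placeholder tone (VITS output would go here).
tone = [0.3 * math.sin(2 * math.pi * 440 * n / 22050) for n in range(11025)]
write_wav("placeholder.wav", tone)
```

Clamping each sample before packing prevents `struct.error` on synthesis output that slightly overshoots the [-1, 1] range.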
- Git - Version control
- GitHub - Code hosting
- npm - Package management
- pip - Python package management
- SETUP.md - Development environment setup guide
- COMMUNITY_DEMO_GUIDE.md - Hackathon demo instructions
- ARCHITECTURE.md - System design and architecture
- TECHNICAL_SPEC.md - API contracts and specifications
Our platform leverages these valuable Cree language resources for learning content and model training:
- itwêwina Dictionary - Comprehensive Plains Cree dictionary with audio pronunciations
- Indigenous Languages Corpora - Text corpora for Indigenous language processing
- Cree Language Podcast - Plains Cree language learning podcast
```
TurtleTalk/
├── frontend/                # React TypeScript application
│   ├── public/
│   │   ├── audio/           # Indigenous language audio files (VITS-generated)
│   │   └── images/          # Assets and course thumbnails
│   └── src/
│       ├── components/      # React components
│       ├── contexts/        # React context providers
│       ├── services/        # API services
│       └── types/           # TypeScript definitions
│
├── backend/                 # FastAPI Python server
│   ├── app/
│   │   ├── models/          # Database models
│   │   ├── routes/          # API endpoints
│   │   ├── schemas/         # Pydantic schemas
│   │   └── services/        # Business logic
│   ├── alembic/             # Database migrations
│   └── scripts/             # Utility scripts
│
├── ml-service/              # Machine learning services
│   ├── vits_main.py         # VITS TTS service
│   └── config.json          # ML model configuration
│
└── tests/                   # Integration tests
    └── integration/         # API contract tests
```
- User authentication and profile management
- Course module system with progressive lessons
- Interactive pronunciation exercises
- AI pronunciation evaluation with Gemini API
- Audio recording and playback
- Real-time feedback with 4-second display
- VITS text-to-speech for Indigenous languages
- Community forums and discussion spaces
- Cultural storytelling section
- Accessibility features (screen reader, keyboard nav)
- Progress tracking and achievements
- SQLite database with migration support
- Full Gemini API integration for context-aware feedback
- PostgreSQL production database setup
- Advanced progress analytics dashboard
- Mobile responsive design enhancements
- Offline mode with service workers
- Whisper AI Integration - Conversational AI tutor with natural dialogue
- Speech-to-Text Model - Real-time voice recognition (prototype completed)
- Interactive Voice Conversations - Back-and-forth dialogue practice with AI tutor
- Additional Indigenous languages support (Ojibwe, Mohawk, Inuktitut, etc.)
- Mobile native apps (iOS/Android)
- Live video sessions with native speakers
- Gamification with leaderboards
- AI-generated personalized learning paths
- Voice recognition for conversational practice
We welcome contributions from developers, linguists, and Indigenous community members! Please see our contributing guidelines:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
- Follow TypeScript and Python best practices
- Write tests for new features
- Respect cultural sensitivity in language content
- Ensure accessibility compliance
- Document API changes
This project is licensed under the MIT License - see the LICENSE file for details.
Built with ❤️ for HackHive 2026
- Project Lead & Full Stack Development
- AI/ML Integration
- Cultural Consultation & Content
- Indigenous Language Elders for their guidance and cultural knowledge
- HackHive 2026 for hosting the hackathon
- Google Gemini API for AI pronunciation evaluation
- OpenAI Whisper for speech recognition capabilities
- VITS Team for the text-to-speech model
- Indigenous communities for their support and feedback
Made with 🐢 for Indigenous language preservation
