Your Virtual Tutor - A beautiful, privacy-first AI chat application built with Next.js, TypeScript, and Hugging Face that provides educational assistance!
- AI-Powered Tutoring - Advanced Qwen model for intelligent educational assistance
- Multilingual Support - English and Bengali language processing
- Voice Interaction - Text-to-speech and speech-to-text capabilities
- Grade-Specific Learning - Tailored responses for Classes 1-12
- Beautiful Modern UI - Glassmorphic design with smooth animations
- Production Ready - Built with Next.js 14, TypeScript, and modern web standards
Shikbo AI is dedicated to empowering students in Bangladesh by providing accessible, AI-powered educational assistance. Our goal is to foster learning through intelligent, multilingual tutoring that adapts to different grade levels and languages.
- Next.js 14 - React framework with App Router and Server Components
- TypeScript - Type-safe development
- Hugging Face Inference - Qwen 2.5-7B multilingual AI model
- HeroUI - Beautiful React components
- Tailwind CSS - Utility-first CSS framework
- Framer Motion - Smooth animations
- React Markdown - Markdown rendering
- Bun - Fast JavaScript runtime and package manager
- Node.js 18+ (or a compatible version)
- A package manager: bun, npm, yarn, or pnpm
1. Clone the repository:

   ```bash
   git clone https://github.com/your-username/shikbo-ai.git
   cd shikbo-ai
   ```

2. Install dependencies (choose your package manager):

   ```bash
   # Using pnpm
   pnpm install

   # Using npm
   npm install

   # Using yarn
   yarn install
   ```

3. Start the development server:

   ```bash
   # Using pnpm
   pnpm run dev

   # Using npm
   npm run dev

   # Using yarn
   yarn dev
   ```

4. Open your browser and visit http://localhost:3000 to start chatting!

Build for production:

```bash
npm run build
```

Start the production server:

```bash
npm start
```

Deploy to Vercel (recommended):

```bash
npx vercel
```
- Select an AI Model - Choose from the dropdown menu in the header based on your needs
- Start Chatting - Type your message in the input field
- First-Time Loading - Models download automatically on first use (cached afterward)
- Switch Models - Change models anytime to get different response styles
- Educational Focus - Ask questions, request explanations, or get help with learning
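Grade-specific responses presumably come from prefixing each request with instructions describing the student's class level and language. A minimal sketch of such a prompt builder - the function name, tiers, and wording below are illustrative, not the app's actual code:

```typescript
// Hypothetical prompt builder: adapts the tutor's instructions to the
// student's grade (Classes 1-12) and language (English or Bengali).
type Language = "english" | "bengali";

function buildTutorPrompt(grade: number, language: Language, question: string): string {
  if (grade < 1 || grade > 12) {
    throw new Error("Grade must be between 1 and 12");
  }
  // Rough difficulty tiers; the real app may use finer-grained rules.
  const level =
    grade <= 5 ? "simple words and short sentences"
    : grade <= 8 ? "clear explanations with everyday examples"
    : "detailed explanations with proper terminology";
  const lang = language === "bengali" ? "Respond in Bengali." : "Respond in English.";
  return `You are a tutor for a Class ${grade} student. Use ${level}. ${lang}\n\nStudent: ${question}`;
}
```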
| Model | Best For | Size | Speed |
|---|---|---|---|
| DistilGPT-2 | General text generation, fast responses | ~300MB | ⚡⚡⚡⚡ |
| Flan-T5 Small | Educational Q&A, explanations, study help | ~200MB | ⚡⚡⚡ |
| DialoGPT Small | Interactive discussions, tutoring | ~350MB | ⚡⚡ |
Note: All models are free and run entirely in your browser. The download happens only once; models are then cached by your browser for future sessions.
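The download-once, cache-afterward behavior described above is typically implemented by memoizing the model-loading promise, so repeated and concurrent requests share a single download. A self-contained sketch of that pattern - `downloadModel` here is a stand-in for the real in-browser model loader, not the app's actual API:

```typescript
// Cache of in-flight and completed model loads, keyed by model name.
// Reusing the stored promise means a second request for the same model
// never triggers a second download.
const modelCache = new Map<string, Promise<string>>();
let downloads = 0;

// Stand-in for the real loader (e.g. a Transformers.js pipeline call).
async function downloadModel(name: string): Promise<string> {
  downloads += 1;
  return `model:${name}`;
}

function getModel(name: string): Promise<string> {
  let cached = modelCache.get(name);
  if (!cached) {
    cached = downloadModel(name);
    modelCache.set(name, cached);
  }
  return cached;
}
```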
- Code Splitting - Optimized bundle sizes with Next.js
- Model Caching - Persistent model storage across sessions
- Lazy Loading - Dynamic imports for better initial load times
- Memory Management - Conversation limits with optional cache clearing
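The conversation limit mentioned above can be as simple as trimming the message history before it is sent to the model. A hedged sketch - the limit value and message shape are illustrative, not the app's real types:

```typescript
// Illustrative message shape; the app's actual type may carry more fields.
interface Message {
  role: "user" | "assistant";
  content: string;
}

// Keep only the most recent messages so memory use stays bounded
// no matter how long the chat session runs.
function trimConversation(messages: Message[], limit = 20): Message[] {
  return messages.length <= limit ? messages : messages.slice(-limit);
}
```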
- Zero Data Collection - No user data ever leaves your device
- Content Security Policy - Security headers for XSS protection
- HTTPS Ready - SSL/TLS support for production deployments
- No External APIs - Complete independence from third-party services
- Meta Tags - Complete Open Graph and Twitter Card support
- Sitemap - Auto-generated sitemap.xml
- Robots.txt - Search engine optimization
- Responsive Design - Works on all devices and screen sizes
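In the App Router, an auto-generated sitemap is just a `sitemap.ts` file exporting a default function. A minimal sketch of what `src/app/sitemap.ts` might contain - the base URL and route list are placeholders, and the inline type mirrors (but does not import) Next.js's `MetadataRoute.Sitemap` shape:

```typescript
// Inline stand-in for Next.js's MetadataRoute.Sitemap entry type.
interface SitemapEntry {
  url: string;
  lastModified: Date;
  changeFrequency?: "daily" | "weekly" | "monthly";
  priority?: number;
}

// Placeholder; the real app would read NEXT_PUBLIC_APP_URL instead.
const baseUrl = "https://your-domain.com";

export default function sitemap(): SitemapEntry[] {
  return [
    { url: baseUrl, lastModified: new Date(), changeFrequency: "weekly", priority: 1 },
  ];
}
```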
```
ai-chat-assistant/
├── src/
│   ├── app/                  # Next.js App Router
│   │   ├── globals.css       # Global styles
│   │   ├── layout.tsx        # Root layout with metadata
│   │   ├── page.tsx          # Home page
│   │   ├── robots.ts         # SEO robots.txt
│   │   └── sitemap.ts        # SEO sitemap
│   ├── components/           # React components
│   │   └── ChatInterface.tsx
│   ├── lib/                  # Utilities and APIs
│   │   ├── api.ts            # Transformers.js integration
│   │   └── utils.ts          # Helper functions
│   └── types/                # TypeScript definitions
│       └── index.ts
├── public/                   # Static assets
├── .env.example              # Environment variables template
├── next.config.js            # Next.js configuration
├── tailwind.config.js        # Tailwind CSS configuration
└── package.json              # Dependencies and scripts
```
Create `.env.local` for optional features:

```bash
# Optional: Google Analytics
NEXT_PUBLIC_ANALYTICS_ID=your_analytics_id

# Optional: App URL for production
NEXT_PUBLIC_APP_URL=https://your-domain.com

# Optional: Google Site Verification
GOOGLE_SITE_VERIFICATION=your_verification_code
```

The app includes a production-optimized configuration:
```js
// next.config.js
const nextConfig = {
  reactStrictMode: true,
  swcMinify: true,
  experimental: {
    serverComponentsExternalPackages: [
      "@huggingface/transformers",
      "onnxruntime-web",
    ],
  },
  // ... additional webpack configuration for Transformers.js
};
```

- Connect your GitHub repository to Vercel
- Set any optional environment variables
- Deploy with automatic builds on push
The app works on any platform supporting Node.js:
- Netlify - Static export compatible
- AWS Amplify - Full SSR support
- Docker - Use provided Dockerfile
- Self-hosted - Standard Node.js deployment
To add a new model:

- Add the model configuration to `src/lib/api.ts`
- Update the model selection in the `availableModels` array
- Test model loading and response formatting
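Assuming `availableModels` is a typed array in `src/lib/api.ts`, adding a model is likely a matter of appending one entry. The field names and entries below are illustrative, not the project's real schema:

```typescript
// Hypothetical shape of one entry in the availableModels array.
interface ModelConfig {
  id: string;          // Hugging Face model identifier
  label: string;       // Name shown in the header dropdown
  sizeMB: number;      // Approximate one-time download size
  description: string;
}

const availableModels: ModelConfig[] = [
  { id: "Xenova/distilgpt2", label: "DistilGPT-2", sizeMB: 300, description: "General text generation" },
  { id: "Xenova/flan-t5-small", label: "Flan-T5 Small", sizeMB: 200, description: "Educational Q&A" },
  // Append new entries here, then verify loading and response formatting.
];
```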
To customize styling:

- Modify `src/app/globals.css` for global styles
- Update `tailwind.config.js` for design system changes
- Edit components in `src/components/` for UI modifications
This project is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0) - see the LICENSE file for details.
- Hugging Face - For Transformers.js and model ecosystem
- Xenova - For ONNX model conversions and Transformers.js
- Vercel - For Next.js framework and deployment platform
- The Open Source Community - For making projects like this possible
- Initial Load: ~2-3 seconds for app initialization
- Model Download: 200MB-500MB per model (one-time only)
- Response Time: 1-5 seconds depending on model size and device
- Memory Usage: 200MB-1GB depending on loaded models
- Browser Support: Chrome 88+, Firefox 78+, Safari 14+
- Documentation: Check our Wiki
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Made with ❤️ by developers who believe in privacy-first AI