AI-powered data analytics dashboard for intelligent data exploration and visualization
- 🤖 AI-Powered Analytics — Natural language chat interface for data exploration
- 🧹 Intelligent Data Cleaning — AI-generated suggestions for data quality improvement
- 📈 Dynamic Visualizations — Interactive charts with Plotly (bar, line, pie, scatter, etc.)
- 📑 Custom Reports — Build and export customizable dashboards as PDF
- 🔄 Multi-LLM Support — Choose between Google Gemini or OpenAI
- 📊 Data Overview — Automatic statistics, data types, and quality metrics
- 🌓 Dark/Light Mode — Seamless theme switching with next-themes
The fastest way to get started is with Docker. Our published images are ready to use.
version: "3.9"
services:
backend:
image: anandavii/aianalytico-backend:1.1.0
container_name: aianalytico-backend
ports:
- "8000:8000"
environment:
LLM_PROVIDER: ${LLM_PROVIDER}
GEMINI_API_KEY: ${GEMINI_API_KEY}
OPENAI_API_KEY: ${OPENAI_API_KEY}
restart: unless-stopped
frontend:
image: anandavii/aianalytico-frontend:1.1.0
container_name: aianalytico-frontend
ports:
- "3000:3000"
depends_on:
- backend
restart: unless-stopped# Choose your LLM provider: gemini or openai
LLM_PROVIDER=gemini
# API Keys (add the one you're using)
GEMINI_API_KEY=your_gemini_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
```

Start the stack:

```bash
docker compose up -d
```

The services are then available at:

| Service | URL |
|---|---|
| Frontend | http://localhost:3000 |
| Backend API | http://localhost:8000 |
| API Docs | http://localhost:8000/docs |
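If you want to confirm the backend is up before opening the UI, one option is to fetch its auto-generated OpenAPI schema (FastAPI serves it at `/openapi.json` by default; adjust if the app disables that route). A minimal sketch in Python:

```python
# Sanity check: fetch the backend's OpenAPI schema. FastAPI exposes it at
# /openapi.json by default, so a 200 response means the API is reachable.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:8000/openapi.json") as resp:
    schema = json.load(resp)

print(f"Backend is up: {schema['info']['title']} ({len(schema.get('paths', {}))} routes)")
```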
For development, you can run the services locally without Docker. You'll need:
- Python 3.11+
- Node.js 20+
- npm
```bash
./start_app.sh
```

This script sets up the Python environment, installs dependencies, and starts both services.

Alternatively, run each service manually:
Terminal 1: Backend
```bash
cd backend
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
# Create .env and add your API keys
uvicorn app:app --host 0.0.0.0 --port 8000 --reload
```

Terminal 2: Frontend
```bash
cd frontend
npm install
npm run dev
```

Configure the application via environment variables in `.env`:
| Variable | Description | Options |
|---|---|---|
| `LLM_PROVIDER` | AI provider to use | `gemini` (default), `openai` |
| `GEMINI_API_KEY` | Google Gemini API key | Required if using Gemini |
| `OPENAI_API_KEY` | OpenAI API key | Required if using OpenAI |
Note: Restart the application after changing the LLM provider.
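For reference, here is a minimal sketch of how a FastAPI backend typically resolves these variables. This is not the project's actual code; the helper name is hypothetical and shown only to illustrate how the settings in the table interact.

```python
# Hypothetical sketch of provider selection from environment variables.
# Illustrative only; the real backend's implementation may differ.
import os

def resolve_llm_config() -> tuple[str, str]:
    provider = os.getenv("LLM_PROVIDER", "gemini").lower()  # gemini is the default
    key_var = "GEMINI_API_KEY" if provider == "gemini" else "OPENAI_API_KEY"
    api_key = os.getenv(key_var)
    if not api_key:
        raise RuntimeError(f"{key_var} must be set when LLM_PROVIDER={provider}")
    return provider, api_key
```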
1. Open http://localhost:3000
2. Click Get Started
3. Upload a CSV or Excel file (see the sample-data snippet below if you need one)
4. Explore your data with the AI-powered chat
5. Clean data using intelligent suggestions
6. Visualize with dynamic charts
7. Build custom reports and export them
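If you don't have a dataset handy, a small synthetic CSV is enough to exercise the upload, chat, and charting flows. For example, using the same pandas/NumPy stack the backend relies on:

```python
# Generate a small sample dataset to try the upload flow.
# Any CSV or Excel file works; this one is purely illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=120, freq="D"),
    "region": rng.choice(["North", "South", "East", "West"], size=120),
    "sales": rng.integers(100, 1000, size=120),
    "returns": rng.integers(0, 40, size=120),
})
df.to_csv("sample_sales.csv", index=False)
print(df.head())
```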
| Layer | Technologies |
|---|---|
| Frontend | Next.js 16, React 19, TypeScript, Tailwind CSS 4 |
| UI Components | Radix UI, Lucide Icons, Framer Motion |
| Data Visualization | Plotly.js, React-Plotly |
| Backend | FastAPI, Python 3.11+, Pydantic |
| AI/LLM | Google Gemini API, OpenAI API |
| Data Processing | Pandas, NumPy, OpenPyXL |
| Deployment | Docker, Docker Compose |
```
┌─────────────────┐       ┌─────────────────┐
│    Frontend     │──────▶│     Backend     │
│   (Next.js)     │◀──────│    (FastAPI)    │
│   Port 3000     │       │    Port 8000    │
└─────────────────┘       └────────┬────────┘
                                   │
                          ┌────────▼────────┐
                          │  LLM Provider   │
                          │ Gemini / OpenAI │
                          └─────────────────┘
```
MIT License - See LICENSE for details.