Decides. Acts. Reflects. Evolves.
A production-grade autonomous system that moves beyond simple prompting to demonstrate
graph-based reasoning, self-correction, and human-in-the-loop control.
Most AI projects today follow a linear path:
Prompt → LLM → Response
ISEA v2.0 breaks this mold. It is designed to answer a deeper engineering question:
"How do we build an AI that can decide WHAT to do, evaluate its OWN performance, and improve WITHOUT human intervention?"
This is not a chatbot. It is a Reasoning Engine.
| Feature | Typical Chatbot | 🧠 ISEA Agent |
|---|---|---|
| Control Flow | Linear Script | Dynamic Stateful Graph |
| Tool Usage | Hardcoded / Forced | Decision-Driven |
| Logic | "Always Answer" | "Think, Plan, Execute" |
| Self-Correction | ❌ None | ✅ Post-Run Reflection |
| Visibility | Black Box | ✅ Live Neural Visualization |
| Safety | ❌ None | ✅ Human-in-the-Loop Gate |
The system operates on a stateful graph where each node represents a distinct cognitive function.
```mermaid
graph TD
    User(User Input) --> Router{Router Node}
    Router -->|Simple Query| Chat(Chat Node)
    Router -->|Complex Request| Planner(Planner Node)
    Router -->|Explainability| Explain(Meta Node)
    Planner --> Executor{Executor Loop}
    Executor -->|Need Tool| Tools(Tool Node)
    Tools --> Executor
    Executor -->|Step Complete| Executor
    Executor -->|Plan Finished| Reporter(Reporter Node)
    Chat --> Validator{Validator Loop}
    Reporter --> Validator
    Validator -->|Pass| Final(Final Response)
    Validator -->|Fail / Improve| Reflector(Self-Reflection)
    Reflector -->|Update Strategy| Router
```
- Router: Analyzes intent (Research vs. Chat vs. Explanation).
- Planner: Decomposes complex goals into executable steps.
- Executor: The "Agent" that uses tools (Search, Math, Code) to solve steps.
- Validator: Critiques the output for accuracy and safety.
- Human Gate: Intercepts sensitive actions (like saving files) for approval.
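The control flow above can be sketched as a small Python state machine. This is a hypothetical simplification for illustration only; the actual project wires these nodes together with LangGraph, and the node bodies here are stubs, not the real logic:

```python
# Toy sketch of the ISEA-style cognitive loop. Node names mirror the
# diagram (Router, Planner, Executor, Validator); the bodies are stubs.

def router(state):
    """Classify intent: simple queries go to chat, complex ones to the planner."""
    return "planner" if state["complex"] else "chat"

def planner(state):
    # Decompose the goal into executable steps (hardcoded here).
    state["plan"] = ["research topic", "summarize findings"]
    return "executor"

def executor(state):
    # In the real system this loops, calling tools (Search, Math, Code)
    # per step; here we just record that the plan ran.
    state["result"] = f"executed {len(state['plan'])} steps"
    return "validator"

def chat(state):
    state["result"] = "direct answer"
    return "validator"

def validator(state):
    # Critique the output; on failure, the real graph routes through
    # the Reflector back to the Router.
    return "final" if state.get("result") else "reflector"

NODES = {"planner": planner, "executor": executor,
         "chat": chat, "validator": validator}

def run(state):
    node = router(state)
    while node != "final":
        node = NODES[node](state)
    return state["result"]
```

Running `run({"complex": True})` walks Router → Planner → Executor → Validator, while `run({"complex": False})` takes the direct Chat path, mirroring the two main branches of the diagram.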
We believe AI reasoning shouldn't be hidden. ISEA features a cyberpunk-inspired dashboard that exposes the agent's "brain" in real-time.
- Live Graph State: Watch nodes light up as the agent "thinks".
- Real-Time Telemetry: Monitor token usage, latency, and context window.
- Thought Stream: See the raw internal monologue and decision paths.
- Glassmorphism Spec: Built with modern Tailwind + Framer Motion for a premium feel.
**Backend**
- Core Logic: Python 3.10+
- Orchestration: LangGraph, LangChain
- Intelligence: Google Gemini 2.0 Flash / OpenAI GPT-4o
- Search: Tavily AI Search

**Frontend**
- Framework: Next.js 15 (App Router)
- Styling: TailwindCSS v4 + Glassmorphism
- Animations: Framer Motion
- 3D Elements: Three.js / React Three Fiber
- Python 3.10+
- Node.js 18+
- API Keys: Google Gemini (or OpenAI), Tavily
1. **Clone the Repository**

   ```bash
   git clone https://github.com/AnmollCodes/Research-AI-Agent.git
   cd Research-AI-Agent
   ```

2. **Backend Setup**

   ```bash
   python -m venv venv

   # Windows
   .\venv\Scripts\activate
   # Mac/Linux
   source venv/bin/activate

   pip install -r requirements.txt
   ```

3. **Frontend Setup**

   ```bash
   cd frontend
   npm install
   ```

4. **Environment Variables**

   Create a `.env` file in the root:

   ```
   GOOGLE_API_KEY=your_key_here
   TAVILY_API_KEY=your_key_here
   ```

5. **Run the System**

   ```bash
   # Terminal 1 (Backend)
   python api.py

   # Terminal 2 (Frontend)
   cd frontend
   npm run dev
   ```
Access the dashboard at http://localhost:3000.
- Long-Term Memory: Vector database integration (Pinecone/Chroma) for persistent context.
- Multi-Modal Support: Image analysis and generation nodes.
- Swarm Mode: Coordination between multiple specialized sub-agents.
- Voice Interface: Real-time voice interaction layer.
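The long-term memory item above amounts, in its simplest form, to a nearest-neighbor lookup over embedding vectors. The sketch below is purely illustrative: a production integration would use Pinecone or Chroma with real model embeddings, whereas the 3-dimensional vectors and stored texts here are made-up toy data.

```python
# Toy long-term memory: store (text, vector) pairs and recall the entry
# whose embedding is most similar to a query embedding.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical memory entries with hand-written toy embeddings.
memory = [
    ("user prefers concise answers", [0.9, 0.1, 0.0]),
    ("last research topic: LangGraph", [0.1, 0.8, 0.2]),
]

def recall(query_vec, store=memory):
    """Return the stored text most similar to the query embedding."""
    return max(store, key=lambda item: cosine(query_vec, item[1]))[0]
```

A vector database replaces the linear `max` scan with an approximate nearest-neighbor index, which is what makes the same idea scale to millions of memories.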
Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request