| title | emoji | colorFrom | colorTo | sdk | sdk_version | app_file | pinned | license | short_description |
|---|---|---|---|---|---|---|---|---|---|
| AgenticRAG | 🚀 | indigo | yellow | streamlit | 1.50.0 | app.py | false | mit | PDF Chat with AgenticAI |
This project is an agentic AI framework with a hybrid RAG system. It demonstrates how to build an LLM-powered agent that can:
- Reason step by step (Plan → Act → Reflect → Answer).
- Use tools (calculator, RAG, internet search, summarization, memory lookup).
- Store and retrieve knowledge with long-term memory.
- Enforce guardrails so the agent stays grounded in the documents.
It is designed to be simple, extensible, and easy to plug into applications (e.g., Streamlit chatbots).
Accessible in a Hugging Face Space: https://huggingface.co/spaces/polojuan/agenticAI
- LLM Adapter
  - Wraps OpenAI (or another provider).
  - Handles completions, token limits, and temperature.
  - Can be swapped with Anthropic, local models, etc.
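As a sketch of what such an adapter could look like (the class and method names below, `LLMAdapter` and `complete`, are illustrative, not the repository's actual API):

```python
from dataclasses import dataclass
from typing import Protocol


class LLMAdapter(Protocol):
    """Minimal interface the agent core could depend on (hypothetical)."""
    def complete(self, prompt: str, max_tokens: int = 512,
                 temperature: float = 0.2) -> str: ...


@dataclass
class EchoAdapter:
    """Offline stand-in provider; a real adapter would call OpenAI,
    Anthropic, or a local model behind the same interface."""
    prefix: str = "echo"

    def complete(self, prompt: str, max_tokens: int = 512,
                 temperature: float = 0.2) -> str:
        # Crude word-based limit, mirroring provider-side truncation.
        words = prompt.split()[:max_tokens]
        return f"{self.prefix}: {' '.join(words)}"
```

Because the agent core only ever sees `complete()`, swapping providers means writing one new adapter class.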
- Agent Core
  - The reasoning loop that implements the ReAct pattern:
    - Thought → plan the next step.
    - Action → decide which tool to call.
    - Observation → receive tool output.
    - Reflection → critique correctness.
    - Repeat until a Final Answer.
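The loop above can be sketched in a few lines of Python, assuming a simple line format where the LLM emits `Action: <tool> {json args}` or `Final: <answer>` (function names here are illustrative, and the critic pass is omitted for brevity):

```python
import json


def react_loop(llm, tools, goal, max_steps=5):
    """Sketch of the Thought -> Action -> Observation loop.
    `llm(prompt)` returns one step; `tools` maps names to callables."""
    transcript = [f"Goal: {goal}"]
    for _ in range(max_steps):
        step = llm("\n".join(transcript))
        transcript.append(step)
        if step.startswith("Final:"):
            return step[len("Final:"):].strip()
        if step.startswith("Action:"):
            name, _, args = step[len("Action:"):].strip().partition(" ")
            obs = tools[name](**json.loads(args or "{}"))
            transcript.append(f"Observation[{obs}]")
    return "Max steps reached; returning best effort."  # fallback answer
```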
- Tools Registry
  - Each tool has a name, description, JSON schema, and a Python function.
  - Tools include:
    - rag → retrieval-augmented generation
    - internet → limited web search
    - search_memory / write_memory → semantic memory access
  - You can add any other tool as required.
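A minimal registry along these lines might pair each tool's metadata with its function (a sketch; `register_tool` and the calculator below are illustrative, not the repository's code):

```python
import ast
import operator

# name -> {"description", "schema", "fn"}; populated by the decorator below.
TOOLS = {}


def register_tool(name, description, schema):
    """Register a function as a tool with a JSON-schema argument spec."""
    def wrap(fn):
        TOOLS[name] = {"description": description, "schema": schema, "fn": fn}
        return fn
    return wrap


@register_tool(
    "calculator",
    "Evaluate a basic arithmetic expression.",
    {"type": "object", "properties": {"expr": {"type": "string"}},
     "required": ["expr"]},
)
def calculator(expr: str) -> str:
    # Safe evaluation via the AST instead of eval().
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return ops[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")

    return str(ev(ast.parse(expr, mode="eval").body))
```

The description and schema are what get injected into the tool manifest shown to the LLM; the function is what Python actually executes.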
- Memory
  - JSONL file for long-term notes.
  - TF-IDF-lite search for retrieval.
  - Stores goals, tool calls, answers, and notes.
- Critic Module
  - After each step, a “critic” pass reviews correctness and safety.
  - Detects tool misuse, hallucinations, or off-topic answers.
  - Suggests corrections.
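One way to implement such a pass is a second LLM call with a strict reviewing prompt (the prompt wording and the `(ok, feedback)` return convention here are assumptions, not the repository's exact design):

```python
def critic_pass(llm, goal, step, observation):
    """Review the last step; return (ok, feedback)."""
    prompt = (
        "You are a strict reviewer. Reply 'OK' if the step below is a "
        "correct, grounded move toward the goal; otherwise reply "
        "'FIX: <what to change>'.\n"
        f"Goal: {goal}\nStep: {step}\nObservation: {observation}"
    )
    verdict = llm(prompt).strip()
    if verdict.upper().startswith("OK"):
        return True, ""
    # Keep only the suggested correction after the 'FIX:' marker.
    return False, verdict.split(":", 1)[-1].strip()
```

The agent loop can then retry the step with the feedback appended to the transcript, which is what turns detection into correction.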
Here’s the step-by-step workflow:
1. User Goal: The user submits a query (e.g., “Summarize this article”).
2. Context Building:
   - Agent retrieves relevant memory.
   - Injects the available tool manifest.
   - Builds a structured system prompt with rules.
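Sketching that context-building step (the function name and rule wording below are illustrative):

```python
def build_system_prompt(goal, memories, tools):
    """Assemble rules + relevant memory + tool manifest + goal."""
    manifest = "\n".join(f"- {name}: {t['description']}"
                         for name, t in tools.items())
    notes = "\n".join(f"- {m}" for m in memories) or "- (none)"
    return (
        "You are a grounded agent. Use only the tools listed and cite "
        "document evidence; if unsure, say so.\n"
        f"Relevant memory:\n{notes}\n"
        f"Tools:\n{manifest}\n"
        f"Goal: {goal}"
    )
```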
3. LLM Reasoning Loop — the LLM outputs one of:
   - `Thought: …` → internal reasoning.
   - `Action: <tool> {args}` → a request to use a tool.
   - `Final: …` → the final answer.
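A small parser for this output format might look like the following (a sketch; the repository's actual parsing may differ):

```python
import json
import re


def parse_step(text: str):
    """Parse one LLM output line into a (kind, payload) pair."""
    m = re.match(r"Action:\s*(\w+)\s*(\{.*\})?\s*$", text, re.S)
    if m:
        # Payload is (tool name, decoded JSON arguments).
        return "action", (m.group(1), json.loads(m.group(2) or "{}"))
    for kind in ("Thought", "Final"):
        if text.startswith(kind + ":"):
            return kind.lower(), text[len(kind) + 1:].strip()
    return "thought", text.strip()  # tolerate a missing prefix
```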
4. Tool Execution:
   - Python validates arguments against the tool’s JSON schema.
   - Executes the tool (e.g., calls `tool_rag`).
   - Returns `Observation[...]` with the results.
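The argument-validation step can be sketched with a minimal schema checker (required keys and primitive types only; a real implementation could use the `jsonschema` package instead):

```python
def validate_args(args: dict, schema: dict):
    """Check args against a JSON-schema-style spec; raise on mismatch."""
    types = {"string": str, "number": (int, float), "boolean": bool,
             "object": dict, "array": list}
    for key in schema.get("required", []):
        if key not in args:
            raise ValueError(f"missing required argument: {key}")
    for key, val in args.items():
        spec = schema.get("properties", {}).get(key)
        if spec and not isinstance(val, types[spec["type"]]):
            raise ValueError(f"argument {key!r} should be {spec['type']}")
    return args
```

Validating before execution is what lets the agent surface a clean error back to the LLM instead of crashing inside a tool.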
5. Reflection:
   - A critic LLM checks whether the last step is correct and safe.
   - Flags errors (e.g., wrong tool, hallucination).
   - Suggests adjustments.
6. Loop Control:
   - Continues until `Final:` is reached, OR
   - Max steps/tokens are hit → fallback answer.
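That guard can be reduced to a budgeted driver (names and limits below are illustrative):

```python
def run_with_budget(step_fn, max_steps=8):
    """Call step_fn() until it returns a final answer or the budget runs out.
    step_fn returns None while the loop should continue."""
    for _ in range(max_steps):
        result = step_fn()
        if result is not None:
            return result
    # Budget exhausted: emit a fallback instead of looping forever.
    return "Step budget exhausted; returning best partial answer."
```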
7. Answer Storage:
   - The final answer is stored in memory with metadata.
   - It can be retrieved by future queries.
- Python 3.8+
- Docker (if using containerized mode)
- Access to a language model (OpenAI API key, local LLM, etc.)
git clone https://github.com/palscruz23/agenticAI.git
cd agenticAI
pip install -r requirements.txt
# Run the app (assuming it’s a Streamlit or web frontend)
streamlit run app.py

This repository is licensed under the MIT License. See the LICENSE file for more details.