Streamlined DFT workflows • HPC automation • Extensible multi‑agent architecture
Docs · Installation Guide · SLURM Guide · Examples
Open Issues · Discussions
DFT Agent is a production‑oriented, tool‑rich autonomous assistant for Density Functional Theory (DFT) and broader computational materials research. It combines:
- A multi‑agent graph built with LangGraph (chat, domain DFT expert, SLURM scheduler)
- A FastAPI backend with streaming, state checkpointing, and structured tool execution
- A Thread‑scoped workspace system for deterministic artifact storage
- A Streamlit frontend for interactive exploration
The system orchestrates structure generation, Quantum ESPRESSO (QE) input authoring, convergence studies, HPC job lifecycle management, and materials data retrieval—while remaining fully extensible via a clean tool registry pattern.
| Category | Capabilities |
|---|---|
| Multi‑Agent | Chatbot, DFT Agent, SLURM Scheduler (pluggable registry) |
| DFT Workflows | QE input generation, k‑point & cutoff convergence, slab/vacuum tuning, structure transforms |
| Materials Data | Materials Project integration, structure parsing & analysis (pymatgen utilities) |
| HPC Automation | SLURM submission, monitoring, queue inspection, resource specification |
| Persistence | SQLite checkpointing + per‑thread filesystem (idempotent tool outputs) |
| Streaming | Server‑Sent Events (token + message events) |
| Extensibility | Decorated tools with deterministic return shapes & workspace routing |
| Safety | Scoped file IO, optional auth secret, configurable model providers |
```text
┌──────────────────────────┐
│   Streamlit Frontend     │  interactive chat
└─────────────▲────────────┘
              │ REST API
┌─────────────┴────────────┐
│     FastAPI Service      │  routing, streaming, feedback
│  • /agent/invoke         │
│  • /agent/stream         │
│  • /agent/history        │
└─────────────▲────────────┘
              │ LangGraph compiled graphs
┌─────────────┴────────────┐
│       Agent Graphs       │  research_agent, dft_agent, slurm_agent
│  • Routing / Planning    │
│  • Tool Execution Node   │
└─────────────▲────────────┘
              │ Tool calls
┌─────────────┴────────────┐
│      Tool Registry       │  ASE, QE, MP, SLURM
└─────────────▲────────────┘
              │ Workspace API
┌─────────────┴────────────┐
│     Thread Workspace     │  structured directories + artifacts
└──────────────────────────┘
```
Key modules: `backend/api/main.py`, `backend/agents/library/`, `backend/agents/dft_tools/`, `backend/utils/workspace.py`.
- Python 3.12+
- `uv` (or fallback to `pip`)
- At least one model provider key (OpenAI, Groq, HF, Cerebras, Ollama)
```shell
git clone https://github.com/abir0/dft-agent.git
cd dft-agent
uv sync          # installs dependencies from pyproject.toml / uv.lock
cp env.example .env
```
```shell
# Start backend API
uv run python backend/run_service.py

# In a second terminal, start the frontend
uv run streamlit run frontend/app.py
```

Or run everything with Docker:

```shell
docker-compose up --build
```

Access:
- Web UI: http://localhost:8501
- API Root: http://localhost:8083
- OpenAPI Docs: http://localhost:8083/docs
| Variable | Purpose |
|---|---|
| MODE | Set to dev for hot reload behaviors |
| AUTH_SECRET | Optional bearer token for protected endpoints |
| OPENAI_API_KEY | OpenAI model access |
| CEREBRAS_API_KEY | Cerebras models |
| GROQ_API_KEY | Groq LLaMA models |
| HF_API_KEY | HuggingFace Inference / endpoints |
| OLLAMA_MODEL | Local Ollama model name (e.g. llama3) |
| OLLAMA_BASE_URL | Ollama server URL |
| USE_FAKE_MODEL | Set true to enable deterministic stub model |
| DEFAULT_MODEL | Override auto-selected default model |
| MP_API_KEY | Materials Project queries |
| ASTA_KEY | Scholarly literature / Asta MCP tools |
| DATABASE_URL | Optional external DB (currently unused placeholder) |
If no real API keys are provided and `USE_FAKE_MODEL` is not `true`, startup will raise an error.
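A minimal `.env` for local development might look like this (all values are placeholders; supply only the provider keys you actually use):

```shell
MODE=dev
OPENAI_API_KEY=sk-...
MP_API_KEY=your-mp-key
# Or run fully offline with the deterministic stub model:
# USE_FAKE_MODEL=true
```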
| Agent | Key | Description |
|---|---|---|
| Chatbot | `chatbot` | General assistant (search, calculator, Python REPL, literature) |
| DFT Agent | `dft_agent` | Domain workflow planner + materials + QE + databases |
| SLURM Scheduler | `slurm_scheduler` | HPC job submission, queue management, monitoring |
Switch agents via the frontend settings or by passing the `agent` parameter to the API.
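Agent selection amounts to a key-to-graph lookup. A toy sketch of that dispatch, with plain functions standing in for compiled LangGraph graphs (names are illustrative, but the keys match the table above):

```python
# Illustrative agent dispatch: the service maps an `agent` key from the
# request to a compiled graph; here, simple functions stand in for graphs.
AGENTS = {
    "chatbot": lambda msg: f"[chatbot] {msg}",
    "dft_agent": lambda msg: f"[dft_agent] {msg}",
    "slurm_scheduler": lambda msg: f"[slurm_scheduler] {msg}",
}

def invoke(agent: str, message: str) -> str:
    """Route a message to the selected agent, rejecting unknown keys."""
    try:
        return AGENTS[agent](message)
    except KeyError:
        raise ValueError(f"Unknown agent '{agent}'; valid keys: {sorted(AGENTS)}")
```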
- Structure manipulation & generation (`structure_tools.py`)
- Quantum ESPRESSO input & execution helpers (`qe_tools.py`)
- Convergence & parameter studies (`convergence_tools.py`)
- Materials Project search / retrieval (`pymatgen_tools.py`, MP API)
- SLURM job lifecycle (`slurm_tools.py`)
- Local database & result curation (`database_tools.py`)
Large outputs are stored under the active thread workspace: `WORKSPACE/<thread_id>/...`.

```text
WORKSPACE/<thread_id>/
  calculations/
  databases/
  results/
  structures/
  pseudos/
  ...
```
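Thread scoping can be sketched as below. This is an illustrative helper, not the actual implementation in `backend/utils/workspace.py`; the point is that every tool resolves paths under `WORKSPACE/<thread_id>/`, so repeated runs land in the same predictable locations:

```python
from pathlib import Path

def thread_workspace(thread_id: str, root: str = "WORKSPACE") -> Path:
    """Create (idempotently) and return the per-thread workspace directory.

    Hypothetical sketch: calling this twice with the same thread_id is a
    no-op the second time, which is what makes tool outputs deterministic.
    """
    base = Path(root) / thread_id
    for sub in ("calculations", "databases", "results", "structures", "pseudos"):
        (base / sub).mkdir(parents=True, exist_ok=True)
    return base
```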
`POST /agent/invoke`

```json
{
  "input": "Generate a QE input for fcc Cu and run a cutoff convergence study",
  "thread_id": "<uuid>",
  "agent": "dft_agent"
}
```

Streaming: `GET /agent/stream?thread_id=<uuid>&agent=dft_agent&input=...` yields `token` and `message` events followed by `[DONE]`.
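A client consumes the stream by reading SSE `data:` lines until the `[DONE]` sentinel. The sketch below assumes token events carry `type` and `content` fields (hypothetical names; check the OpenAPI docs at `/docs` for the exact event schema):

```python
import json

def parse_sse_stream(lines):
    """Collect token events from an SSE stream until the [DONE] sentinel.

    Assumes each event line looks like
    'data: {"type": "token", "content": "..."}' -- field names are an
    assumption, not the project's confirmed schema.
    """
    tokens = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip comments, blank keep-alives, etc.
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":
            break
        event = json.loads(payload)
        if event.get("type") == "token":
            tokens.append(event.get("content", ""))
    return "".join(tokens)
```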
The Streamlit UI provides:
- Real‑time streaming transcripts
- Agent + model selector
- New chat / shareable thread IDs
- Basic privacy notice & feedback hook
Launch (after the backend is running):

```shell
uv run streamlit run frontend/app.py
```

- Python project managed by `uv`
- Ruff / type checking recommended (configure locally)
- Modular tool registration for easy extension
- Create a function in `backend/agents/dft_tools/<new_tool>.py` with the `@tool` decorator.
- Persist large artifacts inside the active workspace using the provided helpers.
- Import and register it in the central registry (see `tool_registry.py`).
- Add a lightweight test verifying registry presence.
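The pattern behind these steps looks roughly like this. It is a self-contained sketch: the real project uses LangChain's `@tool` decorator and the registry in `tool_registry.py`, and `make_supercell` is a made-up example tool:

```python
# Hypothetical registry sketch -- a decorator that records tools by name
# and a tool returning a deterministic dict pointing at a workspace artifact.
TOOL_REGISTRY = {}

def tool(fn):
    """Register a callable under its function name (toy stand-in for @tool)."""
    TOOL_REGISTRY[fn.__name__] = fn
    return fn

@tool
def make_supercell(structure_path: str, repeat: tuple = (2, 2, 2)) -> dict:
    """Return a deterministic result shape referencing a workspace artifact.

    A real tool would load the structure with ASE/pymatgen, apply the
    transform, and write the output into the active thread workspace.
    """
    return {
        "status": "ok",
        "artifact": f"structures/supercell_{'x'.join(map(str, repeat))}.cif",
        "source": structure_path,
    }
```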
```shell
uv run pytest -q
```

- Additional DFT engines (VASP / CASTEP adapters)
- Advanced multi-step automatic planners
- Result visualization panels (band structures, DOS)
- Caching & reuse of convergence curves
- Expanded materials databases integration
We welcome focused, well-scoped contributions:
- Open an issue describing the change.
- Fork & branch: `feature/<feature-name>`.
- Add/update tests & docs for user-visible behavior.
- Submit PR referencing the issue; keep commits atomic.
Distributed under the MIT License. See LICENSE.
- LangGraph
- pymatgen
- Materials Project
- ASE
- Quantum ESPRESSO
- Broader computational materials science community
Have ideas? Open an issue or start a discussion.
Happy computing ⚛️