240 changes: 240 additions & 0 deletions .cursorrules


6 changes: 3 additions & 3 deletions .env.example
@@ -7,9 +7,9 @@ LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here

# Model names (optional - sensible defaults)
ANTHROPIC_MODEL=claude-3-5-sonnet-20240620
OPENAI_MODEL=gpt-4-turbo
# Model names (optional - sensible defaults set in config.py)
# ANTHROPIC_MODEL=claude-sonnet-4-5-20250929
# OPENAI_MODEL=gpt-5.1

# ============== EMBEDDINGS ==============

203 changes: 203 additions & 0 deletions .github/README.md
@@ -0,0 +1,203 @@
---
title: DeepCritical
emoji: 🧬
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: "6.0.1"
python_version: "3.11"
app_file: src/app.py
pinned: false
license: mit
tags:
- mcp-in-action-track-enterprise
- mcp-hackathon
- drug-repurposing
- biomedical-ai
- pydantic-ai
- llamaindex
- modal
---

# DeepCritical

## Intro

DeepCritical is a biomedical deep research agent for drug repurposing: it searches PubMed, ClinicalTrials.gov, and bioRxiv/medRxiv, synthesizes evidence with specialized agent teams, and runs statistical analyses in secure Modal sandboxes.

## Features

- **Multi-Source Search**: PubMed, ClinicalTrials.gov, bioRxiv/medRxiv
- **MCP Integration**: Use our tools from Claude Desktop or any MCP client
- **Modal Sandbox**: Secure execution of AI-generated statistical code
- **LlamaIndex RAG**: Semantic search and evidence synthesis
- **HuggingFace Inference**: model inference via the HuggingFace Inference API
- **HuggingFace MCP**: custom configuration for using community tools
- **Strongly Typed Composable Graphs**: research flows composed from typed graph nodes
- **Specialized Research Teams of Agents**: coordinated agents for each stage of the research flow

## Quick Start

### 1. Environment Setup

```bash
# Install uv if you haven't already
pip install uv

# Sync dependencies
uv sync
```

### 2. Run the UI

```bash
# Start the Gradio app
uv run gradio src/app.py
```

Open your browser to `http://localhost:7860`.

### 3. Connect via MCP

This application exposes a Model Context Protocol (MCP) server, allowing you to use its search tools directly from Claude Desktop or other MCP clients.

**MCP Server URL**: `http://localhost:7860/gradio_api/mcp/`

**Claude Desktop Configuration**:
Add this to your `claude_desktop_config.json`:
```json
{
"mcpServers": {
"deepcritical": {
"url": "http://localhost:7860/gradio_api/mcp/"
}
}
}
```

**Available Tools**:
- `search_pubmed`: Search peer-reviewed biomedical literature.
- `search_clinical_trials`: Search ClinicalTrials.gov.
- `search_biorxiv`: Search bioRxiv/medRxiv preprints.
- `search_all`: Search all sources simultaneously.
- `analyze_hypothesis`: Secure statistical analysis using Modal sandboxes.
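Each search tool returns structured evidence records. The sketch below is a hypothetical shape a client might consume, and illustrates what `search_all` conceptually does with per-source batches; the field names and dedup logic are assumptions, not the server's actual schema:

```python
from dataclasses import dataclass


@dataclass
class SearchResult:
    """Hypothetical result record for the search_* tools; real field names may differ."""
    source: str    # e.g. "pubmed", "clinical_trials", "biorxiv"
    title: str
    url: str
    snippet: str


def merge_results(*batches: list) -> list:
    """Combine per-source batches, dropping duplicate URLs."""
    seen: set[str] = set()
    merged = []
    for batch in batches:
        for result in batch:
            if result.url not in seen:
                seen.add(result.url)
                merged.append(result)
    return merged


pubmed = [SearchResult("pubmed", "Metformin and aging", "https://pubmed.example/1", "...")]
trials = [SearchResult("clinical_trials", "Metformin trial", "https://trials.example/2", "...")]
combined = merge_results(pubmed, trials)
```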


## Deep Research Flows

- iterativeResearch
- deepResearch
- researchTeam

### Iterative Research

```mermaid
sequenceDiagram
    participant IterativeFlow
    participant ThinkingAgent
    participant KnowledgeGapAgent
    participant ToolSelector
    participant ToolExecutor
    participant JudgeHandler
    participant WriterAgent

    IterativeFlow->>IterativeFlow: run(query)

    loop Until complete or max_iterations
        IterativeFlow->>ThinkingAgent: generate_observations()
        ThinkingAgent-->>IterativeFlow: observations

        IterativeFlow->>KnowledgeGapAgent: evaluate_gaps()
        KnowledgeGapAgent-->>IterativeFlow: KnowledgeGapOutput

        alt Research complete
            IterativeFlow->>WriterAgent: create_final_report()
            WriterAgent-->>IterativeFlow: final_report
        else Gaps remain
            IterativeFlow->>ToolSelector: select_agents(gap)
            ToolSelector-->>IterativeFlow: AgentSelectionPlan

            IterativeFlow->>ToolExecutor: execute_tool_tasks()
            ToolExecutor-->>IterativeFlow: ToolAgentOutput[]

            IterativeFlow->>JudgeHandler: assess_evidence()
            JudgeHandler-->>IterativeFlow: should_continue
        end
    end
```
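The loop above can be sketched in Python with stubbed agents. Class and method names mirror the diagram, but the bodies are illustrative placeholders, not the project's actual implementations:

```python
from dataclasses import dataclass, field


@dataclass
class KnowledgeGapOutput:
    """Mirrors the diagram's gap-evaluation result (field names are assumed)."""
    research_complete: bool
    outstanding_gaps: list = field(default_factory=list)


class IterativeFlow:
    """Illustrative skeleton of the iterative research loop."""

    def __init__(self, max_iterations: int = 5):
        self.max_iterations = max_iterations
        self.findings: list[str] = []

    def run(self, query: str) -> str:
        for _ in range(self.max_iterations):
            observations = self.generate_observations(query)      # ThinkingAgent
            gaps = self.evaluate_gaps(observations)               # KnowledgeGapAgent
            if gaps.research_complete:
                break
            for gap in gaps.outstanding_gaps:
                plan = self.select_agents(gap)                    # ToolSelector
                self.findings += self.execute_tool_tasks(plan)    # ToolExecutor
            if not self.assess_evidence():                        # JudgeHandler
                break
        return self.create_final_report(query)                    # WriterAgent

    # --- stubbed agents: in the real flow these are LLM-backed ---
    def generate_observations(self, query):
        return [f"observation about {query}"]

    def evaluate_gaps(self, observations):
        # Pretend research completes once any findings exist.
        return KnowledgeGapOutput(research_complete=bool(self.findings),
                                  outstanding_gaps=["mechanism of action"])

    def select_agents(self, gap):
        return {"gap": gap, "agents": ["search_pubmed"]}

    def execute_tool_tasks(self, plan):
        return [f"evidence for {plan['gap']}"]

    def assess_evidence(self):
        return True

    def create_final_report(self, query):
        return f"Report on {query}: {len(self.findings)} findings"


report = IterativeFlow().run("metformin repurposing")
```

The stubs terminate after one gap-filling pass; swapping them for real agents preserves the same control flow.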


### Deep Research

```mermaid
sequenceDiagram
    actor User
    participant GraphOrchestrator
    participant InputParser
    participant GraphBuilder
    participant GraphExecutor
    participant Agent
    participant BudgetTracker
    participant WorkflowState

    User->>GraphOrchestrator: run(query)
    GraphOrchestrator->>InputParser: detect_research_mode(query)
    InputParser-->>GraphOrchestrator: mode (iterative/deep)
    GraphOrchestrator->>GraphBuilder: build_graph(mode)
    GraphBuilder-->>GraphOrchestrator: ResearchGraph
    GraphOrchestrator->>WorkflowState: init_workflow_state()
    GraphOrchestrator->>BudgetTracker: create_budget()
    GraphOrchestrator->>GraphExecutor: _execute_graph(graph)

    loop For each node in graph
        GraphExecutor->>Agent: execute_node(agent_node)
        Agent->>Agent: process_input
        Agent-->>GraphExecutor: result
        GraphExecutor->>WorkflowState: update_state(result)
        GraphExecutor->>BudgetTracker: add_tokens(used)
        GraphExecutor->>BudgetTracker: check_budget()
        alt Budget exceeded
            GraphExecutor->>GraphOrchestrator: emit(error_event)
        else Continue
            GraphExecutor->>GraphOrchestrator: emit(progress_event)
        end
    end

    GraphOrchestrator->>User: AsyncGenerator[AgentEvent]
```
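The `BudgetTracker` interactions in the diagram (`add_tokens`, `check_budget`) amount to a simple token-accounting guard around graph execution. A minimal sketch, with method names taken from the diagram but internals assumed:

```python
class BudgetTracker:
    """Tracks token spend across graph nodes; names from the diagram, internals assumed."""

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.used = 0

    def add_tokens(self, used: int) -> None:
        self.used += used

    def check_budget(self) -> bool:
        """True while budget remains; the executor emits an error event otherwise."""
        return self.used <= self.max_tokens


tracker = BudgetTracker(max_tokens=10_000)
for node_cost in (3_000, 4_000, 5_000):   # hypothetical per-node token usage
    tracker.add_tokens(node_cost)
    if not tracker.check_budget():
        event = "error_event"             # budget exceeded: stop the graph
        break
else:
    event = "progress_event"
```

Checking the budget after every node, as the loop in the diagram does, bounds overspend to at most one node's usage.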

### Research Team
A coordinated team of specialized agents composing the Critical Deep Research Agent.

## Development

### Run Tests

```bash
uv run pytest
```

### Run Checks

```bash
make check
```

## Architecture

DeepCritical uses a Vertical Slice Architecture:

1. **Search Slice**: Retrieving evidence from PubMed, ClinicalTrials.gov, and bioRxiv.
2. **Judge Slice**: Evaluating evidence quality using LLMs.
3. **Orchestrator Slice**: Managing the research loop and UI.
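One way to read the slice boundaries above is as three small interfaces, with the orchestrator depending only on the other slices' contracts. This is a hypothetical sketch: the protocol and class names are illustrative, not the project's actual types:

```python
from typing import Protocol


class SearchSlice(Protocol):
    def search(self, query: str) -> list: ...


class JudgeSlice(Protocol):
    def judge(self, evidence: list) -> float: ...


class SimpleOrchestrator:
    """Orchestrator slice wired from the other two slices via their protocols."""

    def __init__(self, search: SearchSlice, judge: JudgeSlice):
        self.search_slice = search
        self.judge_slice = judge

    def run(self, query: str) -> str:
        evidence = self.search_slice.search(query)
        score = self.judge_slice.judge(evidence)
        return f"{len(evidence)} items, quality {score:.1f}"


# Fakes standing in for the real search and judge slices:
class FakeSearch:
    def search(self, query):
        return ["evidence A", "evidence B"]


class FakeJudge:
    def judge(self, evidence):
        return 0.8


summary = SimpleOrchestrator(FakeSearch(), FakeJudge()).run("metformin")
```

Because each slice only sees the others through a protocol, a slice can be tested or swapped (e.g. a mock judge in unit tests) without touching the rest.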

Built with:
- **PydanticAI**: For robust agent interactions.
- **Gradio**: For the streaming user interface.
- **PubMed, ClinicalTrials.gov, bioRxiv**: For biomedical data.
- **MCP**: For universal tool access.
- **Modal**: For secure code execution.

## Team

- The-Obstacle-Is-The-Way
- MarioAderman
- Josephrp

## Links

- [GitHub Repository](https://github.com/The-Obstacle-Is-The-Way/DeepCritical-1)
61 changes: 47 additions & 14 deletions .github/workflows/ci.yml
@@ -2,33 +2,66 @@ name: CI

on:
push:
branches: [main, dev]
branches: [main, develop]
pull_request:
branches: [main, dev]
branches: [main, develop]

jobs:
check:
test:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.11"]

steps:
- uses: actions/checkout@v4

- name: Install uv
uses: astral-sh/setup-uv@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
version: "latest"

- name: Set up Python 3.11
run: uv python install 3.11
python-version: ${{ matrix.python-version }}

- name: Install dependencies
run: uv sync --all-extras
run: |
python -m pip install --upgrade pip
pip install -e ".[dev]"

- name: Lint with ruff
run: uv run ruff check src tests
run: |
ruff check . --exclude tests
ruff format --check . --exclude tests

- name: Type check with mypy
run: uv run mypy src
run: |
mypy src

- name: Install embedding dependencies
run: |
pip install -e ".[embeddings]"

- name: Run unit tests (excluding OpenAI and embedding providers)
env:
HF_TOKEN: ${{ secrets.HF_TOKEN }}
run: |
pytest tests/unit/ -v -m "not openai and not embedding_provider" --tb=short -p no:logfire

- name: Run local embeddings tests
env:
HF_TOKEN: ${{ secrets.HF_TOKEN }}
run: |
pytest tests/ -v -m "local_embeddings" --tb=short -p no:logfire || true
continue-on-error: true # Allow failures if dependencies not available

- name: Run HuggingFace integration tests
env:
HF_TOKEN: ${{ secrets.HF_TOKEN }}
run: |
pytest tests/integration/ -v -m "huggingface and not embedding_provider" --tb=short -p no:logfire || true
continue-on-error: true # Allow failures if HF_TOKEN not set

- name: Run tests
run: uv run pytest tests/unit/ -v
- name: Run non-OpenAI integration tests (excluding embedding providers)
env:
HF_TOKEN: ${{ secrets.HF_TOKEN }}
run: |
pytest tests/integration/ -v -m "integration and not openai and not embedding_provider" --tb=short -p no:logfire || true
continue-on-error: true # Allow failures if dependencies not available
3 changes: 3 additions & 0 deletions .gitignore
@@ -1,3 +1,6 @@
folder/
.cursor/
.ruff_cache/
# Python
__pycache__/
*.py[cod]
45 changes: 44 additions & 1 deletion .pre-commit-config.yaml
@@ -3,19 +3,62 @@ repos:
rev: v0.4.4
hooks:
- id: ruff
args: [--fix]
args: [--fix, --exclude, tests]
exclude: ^reference_repos/
- id: ruff-format
args: [--exclude, tests]
exclude: ^reference_repos/

- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.10.0
hooks:
- id: mypy
files: ^src/
exclude: ^folder
additional_dependencies:
- pydantic>=2.7
- pydantic-settings>=2.2
- tenacity>=8.2
- pydantic-ai>=0.0.16
args: [--ignore-missing-imports]

- repo: local
hooks:
- id: pytest-unit
name: pytest unit tests (no OpenAI)
entry: uv
language: system
types: [python]
args: [
"run",
"pytest",
"tests/unit/",
"-v",
"-m",
"not openai and not embedding_provider",
"--tb=short",
"-p",
"no:logfire",
]
pass_filenames: false
always_run: true
require_serial: false
- id: pytest-local-embeddings
name: pytest local embeddings tests
entry: uv
language: system
types: [python]
args: [
"run",
"pytest",
"tests/",
"-v",
"-m",
"local_embeddings",
"--tb=short",
"-p",
"no:logfire",
]
pass_filenames: false
always_run: true
require_serial: false
14 changes: 14 additions & 0 deletions .pre-commit-hooks/run_pytest.ps1
@@ -0,0 +1,14 @@
# PowerShell pytest runner for pre-commit (Windows)
# Uses uv if available, otherwise falls back to pytest

if (Get-Command uv -ErrorAction SilentlyContinue) {
uv run pytest $args
} else {
Write-Warning "uv not found, using system pytest (may have missing dependencies)"
pytest $args
}




