A FastAPI server wrapper for Cognee that exposes AI memory graph functionality via REST API. Perfect for integration with Claude Code Skills, automation workflows, and building AI-powered applications.
Cognee API provides a RESTful interface to Cognee's powerful knowledge graph capabilities, enabling you to:
- Ingest and process text data into structured knowledge graphs
- Build semantic relationships between concepts
- Query AI memory using natural language
- Manage and enrich knowledge bases programmatically
- RESTful API - Clean HTTP endpoints for all Cognee operations
- Async by Default - Built on FastAPI for high-performance async operations
- Type Safety - Full Pydantic validation for requests and responses
- Multiple Search Modes - Graph completion, insights extraction, and coding rules
- Multi-tenant Support - User and node set isolation for different contexts
- Production Ready - Health checks, error handling, and structured responses
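As a sketch of what the Pydantic validation looks like on the request side, here is a hypothetical model for the `/add` request body documented below; the real definitions live in `server.py` and may differ in naming and defaults:

```python
# Hypothetical sketch of the /add request model; field names follow
# this README, not necessarily the shipped server.py.
from typing import Optional

from pydantic import BaseModel


class AddRequest(BaseModel):
    text: str                           # required payload text
    user_id: Optional[str] = None       # multi-tenant isolation
    node_set: Optional[str] = None      # context grouping
    dataset_name: Optional[str] = None  # target dataset
```

With a model like this, FastAPI rejects a request missing `text` with a 422 response before the handler ever runs.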
Install the dependencies:

```bash
pip install fastapi uvicorn cognee httpx
```

Create a `.env` file based on `.env.example`:

```bash
LLM_API_KEY=your-openai-api-key-here
# LLM_PROVIDER=ollama  # Uncomment for local LLM usage
```

Start the server:

```bash
# Development mode with auto-reload
uvicorn server:app --reload

# Production mode
uvicorn server:app --host 0.0.0.0 --port 8000
```

The server will start at http://localhost:8000. Visit http://localhost:8000/docs for interactive API documentation.
- Create your `.env` file in the project directory with your environment variables:

  ```bash
  LLM_API_KEY=your-api-key-here
  ```

- Run the deployment script:

  ```bash
  ./deploy.sh
  ```
After deploying files to the server, SSH into eva_cognee and set up the environment:

```bash
# SSH into the server
ssh root@eva_cognee

# Navigate to the project directory
cd /root/cognee-api

# Create a virtual environment
python3 -m venv venv

# Activate the virtual environment
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Run the server
python server.py
```

The server will start on http://0.0.0.0:8000 (accessible at http://eva_cognee:8000 or http://192.168.30.33:8000).
To run the server after initial setup:

```bash
cd /root/cognee-api
source venv/bin/activate
python server.py
```

### GET /health

### POST /add
Content-Type: application/json

```json
{
  "text": "Your text content here",
  "user_id": "optional-user-id",
  "node_set": "optional-node-set",
  "dataset_name": "optional-dataset-name"
}
```

### POST /cognify
Content-Type: application/json

```json
{
  "datasets": ["dataset1", "dataset2"]
}
```

### POST /memify
Content-Type: application/json

```json
{
  "dataset": "dataset-name",
  "extraction_tasks": [],
  "enrichment_tasks": []
}
```

### POST /search
Content-Type: application/json

```json
{
  "query_text": "What is Cognee?",
  "query_type": "GRAPH_COMPLETION",
  "user_id": "optional-user-id",
  "node_set": "optional-node-set",
  "node_name": []
}
```

Search Types:
- `GRAPH_COMPLETION` - General knowledge graph queries
- `INSIGHTS` - Extract insights and patterns
- `CODING_RULES` - Query coding-specific rules
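A hedged sketch of calling `/search` over plain HTTP with httpx; the payload fields mirror the request body shown above, and a server on localhost:8000 is assumed:

```python
# Builds a /search payload matching the documented request body, then
# posts it. httpx is imported lazily so the builder works standalone.
from typing import List, Optional


def build_search_payload(query_text: str,
                         query_type: str = "GRAPH_COMPLETION",
                         user_id: Optional[str] = None,
                         node_set: Optional[str] = None,
                         node_name: Optional[List[str]] = None) -> dict:
    payload = {
        "query_text": query_text,
        "query_type": query_type,
        "node_name": node_name or [],
    }
    # Optional fields are omitted rather than sent as null.
    if user_id is not None:
        payload["user_id"] = user_id
    if node_set is not None:
        payload["node_set"] = node_set
    return payload


def search(query_text: str, base_url: str = "http://localhost:8000",
           **kwargs) -> dict:
    import httpx

    resp = httpx.post(f"{base_url}/search",
                      json=build_search_payload(query_text, **kwargs))
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    print(search("What is Cognee?", query_type="INSIGHTS"))
```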
### POST /delete
Content-Type: application/json

```json
{
  "data_id": "data-id-to-delete"
}
```

### POST /prune

Example usage with the bundled Python client:

```python
import asyncio

from client import CogneeClient, SearchType


async def main():
    client = CogneeClient(base_url="http://localhost:8000")

    # Ingest data
    add_result = await client.add(
        "Cognee builds AI memory graphs.",
        dataset_name="demo"
    )

    # Build knowledge graph
    await client.cognify(datasets=["demo"])

    # Enrich (optional)
    await client.memify(dataset="demo")

    # Query the graph
    search_result = await client.search(
        "What is Cognee?",
        query_type=SearchType.GRAPH_COMPLETION
    )
    print(search_result["data"])

    await client.close()

asyncio.run(main())
```

The typical workflow:

- Ingest - Add your text data using `/add`
- Process - Build the knowledge graph with `/cognify`
- Enrich - Optionally enhance with `/memify`
- Query - Search your knowledge base with `/search`
- Manage - Clean up with `/delete` or `/prune`
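The same workflow can also be driven without the client library. This sketch issues the raw HTTP calls in order; the payload shapes are copied from the endpoint sections above, and a server on localhost:8000 is assumed:

```python
# Ordered (endpoint, JSON body) pairs for one ingest-to-query pass;
# bodies mirror the endpoint documentation above.
def workflow_requests(text: str, dataset: str = "demo") -> list:
    return [
        ("/add", {"text": text, "dataset_name": dataset}),
        ("/cognify", {"datasets": [dataset]}),
        ("/memify", {"dataset": dataset,
                     "extraction_tasks": [],
                     "enrichment_tasks": []}),
        ("/search", {"query_text": "What is Cognee?",
                     "query_type": "GRAPH_COMPLETION",
                     "node_name": []}),
    ]


def run_workflow(text: str, base_url: str = "http://localhost:8000") -> dict:
    import httpx  # lazy import; listed in the project requirements

    with httpx.Client(base_url=base_url, timeout=120.0) as client:
        for path, body in workflow_requests(text):
            resp = client.post(path, json=body)
            resp.raise_for_status()
    # The last response is the /search result envelope.
    return resp.json()


if __name__ == "__main__":
    print(run_workflow("Cognee builds AI memory graphs."))
```

The generous timeout reflects that `/cognify` and `/memify` do LLM-backed graph construction, which can take well over the default httpx timeout.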
All endpoints return consistent JSON responses:

```json
{
  "success": true,
  "data": {},
  "message": "Operation description"
}
```

Error responses follow the same structure with `"success": false`.
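Given this envelope, client code can unwrap responses uniformly. A small helper (hypothetical, not part of the shipped `client.py`):

```python
# Unwrap the standard response envelope: return "data" on success,
# raise with the server's message otherwise. Hypothetical helper,
# not part of the shipped client.
def unwrap(response: dict):
    if response.get("success"):
        return response.get("data")
    raise RuntimeError(response.get("message", "Cognee API error"))
```

Usage: `unwrap(resp.json())` after any endpoint call turns the error path into an exception instead of a silently ignored `"success": false`.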
```
cognee-api/
├── server.py       # FastAPI server implementation
├── client.py       # Python client library
├── .env.example    # Environment variables template
├── README.md       # This file
└── CLAUDE.md       # Claude Code integration guide
```
- Python 3.8+
- FastAPI
- Uvicorn
- Cognee
- httpx (for client)
- Pydantic
- Claude Code Skills - Build AI assistants with persistent memory
- Documentation Q&A - Ingest docs and query with natural language
- Knowledge Management - Create searchable knowledge bases
- Code Analysis - Extract and query coding patterns and rules
- Research Tools - Build semantic research databases
[Add your license here]
[Add contribution guidelines here]
For issues and questions:
- Cognee documentation: Cognee GitHub
- FastAPI documentation: FastAPI Docs