OpenMemory gives AI agents real long-term memory. Not vector search. Not RAG. Actual memory.
Plug-and-play memory for LLMs in 3 lines of code. Add persistent, intelligent, human-like memory and recall to any model in minutes.
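As a rough illustration of what a "plug-and-play, three-line" memory integration tends to look like, here is a hypothetical TypeScript sketch. The `MemoryStore` class and its `remember`/`recall` methods are placeholder names for illustration only, not OpenMemory's actual API.

```typescript
// Hypothetical sketch of a three-line memory integration.
// MemoryStore, remember(), and recall() are illustrative names,
// not the actual OpenMemory API.

interface MemoryRecord {
  text: string;
  timestamp: number;
}

class MemoryStore {
  private records: MemoryRecord[] = [];

  // Persist a new memory.
  remember(text: string): void {
    this.records.push({ text, timestamp: Date.now() });
  }

  // Naive keyword recall; a real system would rank by relevance.
  recall(query: string, limit = 3): string[] {
    const terms = query.toLowerCase().split(/\s+/);
    return this.records
      .filter(r => terms.some(t => r.text.toLowerCase().includes(t)))
      .slice(-limit)
      .map(r => r.text);
  }
}

// The "three lines": create a store, save context, and inject recalled
// memories into the next prompt sent to any model.
const memory = new MemoryStore();
memory.remember("User prefers concise answers in TypeScript.");
const prompt = `Context:\n${memory.recall("TypeScript").join("\n")}\n\nQuestion: ...`;

console.log(prompt);
```

The point of the pattern is that the memory layer sits entirely outside the model call, so it can be bolted onto any LLM client without changing how completions are requested.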
Git-based version control file system for joint management of code, data, and models, and the relationships between them.
Track, sync & share AI coding sessions across your team. Context that survives beyond the chat. Stop re-explaining yourself every session. Currently works with Claude Code.
Distributed data mesh for real-time access, migration, and replication across diverse databases — built for AI, security, and scale.
Stop paying for AI APIs during development. LocalCloud runs everything locally - GPT-level models, databases, all free.
TME: Structured memory engine for LLM agents to plan, rollback, and reason across multi-step tasks.
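To make the plan/rollback idea concrete, here is a generic checkpoint-and-rollback memory sketch for a multi-step agent. It illustrates the pattern only; the names and structure are hypothetical and are not TME's actual interface.

```typescript
// Generic checkpoint/rollback memory for a multi-step agent.
// Illustrates the pattern only; names are hypothetical, not TME's API.

type AgentState = Record<string, unknown>;

class SteppedMemory {
  private history: AgentState[] = [];
  private current: AgentState = {};

  // Record the outcome of a completed step and checkpoint the prior state.
  commit(update: AgentState): void {
    this.history.push({ ...this.current });
    this.current = { ...this.current, ...update };
  }

  // Undo the last N steps, e.g. after a failed tool call or bad plan branch.
  rollback(steps = 1): void {
    for (let i = 0; i < steps && this.history.length > 0; i++) {
      this.current = this.history.pop()!;
    }
  }

  snapshot(): AgentState {
    return { ...this.current };
  }
}

// Usage: commit after each step, roll back when a step fails validation.
const mem = new SteppedMemory();
mem.commit({ plan: ["fetch data", "summarize"], step: 1 });
mem.commit({ step: 2, draft: "possibly wrong summary" });
mem.rollback();              // discard the bad draft
console.log(mem.snapshot()); // { plan: [...], step: 1 }
```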
GPU-aware inference mesh for large-scale AI serving
A curated list of awesome tools, frameworks, platforms, and resources for building scalable and efficient AI infrastructure, including distributed training, model serving, MLOps, and deployment.
Production-ready AI infrastructure: RAG with smart reindexing, persistent memory, browser automation, and MCP integration. Stop rebuilding tools for every AI project.
UniRobot is an embodied-intelligence software framework that integrates the robot brain (data, models, model training) with the robot body (perception, model inference, control).
Agentic Reliability Framework - Production-grade multi-agent AI system for infrastructure reliability monitoring and self-healing. PyPI: https://pypi.org/project/agentic-reliability-framework/
This repository contains a list of various service-specific Azure Landing Zone implementation options.
Lightfast is a neural memory system for teams. It indexes code, docs, tickets, and conversations so people and AI agents can search by meaning, get answers with sources, and trace decisions across their organization.
Secure Computing in the AI age
A modular, self-contained file format for executable AI prompts, logic, and data.
Enterprise-grade LLM routing microservice with multi-provider support, intelligent failover, and cost optimization.
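For context on what multi-provider failover with cost optimization usually means in practice, the following is a generic TypeScript sketch that tries providers in cost order and falls back on failure. The provider names and fields are placeholders, not this service's configuration or API.

```typescript
// Generic multi-provider failover: try providers in cost order and
// fall back on error. Provider names and fields are placeholders,
// not this project's actual configuration.

interface Provider {
  name: string;
  costPerMTokens: number; // used to order attempts, cheapest first
  complete: (prompt: string) => Promise<string>;
}

async function routeWithFailover(prompt: string, providers: Provider[]): Promise<string> {
  // A real router might also weight latency, quality, and rate limits.
  const ordered = [...providers].sort((a, b) => a.costPerMTokens - b.costPerMTokens);
  let lastError: unknown;
  for (const p of ordered) {
    try {
      return await p.complete(prompt);
    } catch (err) {
      lastError = err; // record and try the next provider
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}

// Usage with stub providers standing in for real API clients.
const providers: Provider[] = [
  { name: "cheap-model", costPerMTokens: 0.5, complete: async () => { throw new Error("rate limited"); } },
  { name: "fallback-model", costPerMTokens: 3.0, complete: async (p) => `answer to: ${p}` },
];

routeWithFailover("Summarize this ticket.", providers).then(console.log);
```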
Rent ready-to-use cloud GPUs in seconds. Lium CLI makes it easy to launch, manage, and scale GPU compute directly from your terminal. Fast, cost-optimized, and built for AI & ML developers.
An AI memory prototype created by sam33rch to learn more about the problem by building.