Build and deploy autonomous and multi-agent systems powered by large language models (LLMs).
- Cloud Platforms for Agents
- Context Processing
- Foundation Model Providers
- Inference Providers
- Interoperability Protocols
- Local LLM Tools
- Observability
- Orchestration Frameworks
- Sandboxes
- Learning Resources
## Cloud Platforms for Agents

- Amazon Bedrock - The AWS platform for building generative AI applications and agents.
- Vertex AI Agent Builder - A suite of Google Cloud products designed to build, scale, and manage AI agents in production environments.
## Context Processing

- Harrier OSS - A family of multilingual text embedding models developed by Microsoft.
- OpenAI Embedding Models - Large and small models developed by OpenAI.
- Hugging Face Transformers - Hugging Face's library for Transformer models, providing hundreds of pretrained architectures.
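Whatever model produces them, text embeddings are plain numeric vectors, and retrieval typically compares them with cosine similarity. A minimal sketch in pure Python, using toy vectors in place of real model outputs:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embedding-model outputs.
query = [0.1, 0.3, 0.5]
doc_a = [0.1, 0.3, 0.5]   # same direction -> similarity near 1.0
doc_b = [0.5, -0.3, 0.1]  # different direction -> lower similarity

best_match = max([doc_a, doc_b], key=lambda d: cosine_similarity(query, d))
```

Real pipelines swap the toy lists for vectors returned by an embedding API or a local model; the comparison step stays the same.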
## Foundation Model Providers

- Anthropic Claude - Foundation models such as Haiku, Opus, and Sonnet.
- Google DeepMind - The Gemini family and Gemma open models, spanning multimodal and lightweight use cases.
- Meta LLaMA - A family of open-weight language models designed for developers and researchers, supporting fine-tuning, adaptation, and deployment across a broad ecosystem.
- OpenAI - Frontier and specialized models for text, image, speech-to-speech, text-to-speech, and transcription tasks.
## Inference Providers

- Cerebras - High-performance AI inference infrastructure focused on large-scale workloads and low-latency execution.
- Cohere - Enterprise-oriented language models and inference APIs for NLP and retrieval-based applications.
- Fal - Platform for running and fine-tuning generative media models (image, video, audio) using serverless and on-demand GPU infrastructure.
- Hyperbolic - Open-access cloud platform for running and serving AI models.
- Featherless - Infrastructure for deploying and serving open-weight models with minimal setup.
- Fireworks - Inference platform for open-source models with optimization for performance, scalability, and customization.
- Groq - Low-latency inference platform powered by custom hardware for deterministic model execution.
- HF Inference - Serverless inference APIs provided by Hugging Face for deploying and consuming machine learning models.
- Novita - Unified API platform for accessing and deploying multiple models and running agent-based workflows.
- Nscale - Infrastructure provider covering compute, storage, and deployment for AI systems across environments.
- OVH AI Endpoints - Managed APIs for integrating and serving machine learning and generative AI models.
- Public AI - Open-source and nonprofit initiative providing shared infrastructure for public AI model access and experimentation.
- Replicate - Platform for running, deploying, and fine-tuning models via API-based workflows.
- SambaNova - AI inference systems built on specialized hardware and software for large-scale model execution.
- Scaleway - Cloud platform supporting the deployment and scaling of AI models and applications.
- Together AI - Platform for training, fine-tuning, and serving open and research-driven AI models.
- WaveSpeedAI - Infrastructure for accelerating generative media workloads, particularly image and video models.
- Zai - Platform providing access to conversational AI and agent-based systems.
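Many of these providers expose OpenAI-compatible chat-completions endpoints, so switching between them is often just a change of base URL and model name. A minimal sketch of building such a request payload; the endpoint and model name below are placeholders, not any specific provider's values:

```python
import json

# Hypothetical endpoint -- substitute your provider's actual base URL.
BASE_URL = "https://api.example-provider.com/v1/chat/completions"

def chat_request(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

payload = chat_request("example-model", "Summarize this repo in one line.")
body = json.dumps(payload)  # the JSON body an HTTP client would POST
```

The payload is only constructed here, not sent; in practice you would POST `body` to the provider's endpoint with your API key in an `Authorization` header.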
## Interoperability Protocols

- Agent2Agent (A2A) - An open standard designed to enable seamless communication and collaboration between AI agents.
- Agent Payments Protocol (AP2) - An open protocol for the emerging Agent Economy. It enables secure, reliable, and interoperable agent commerce for developers, merchants, and the payments industry.
- Model Context Protocol (MCP) - An open-source standard for connecting AI applications to external systems.
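MCP exchanges JSON-RPC 2.0 messages between client and server. A minimal sketch of serializing an MCP-style `initialize` request; the field values are illustrative, not a complete handshake:

```python
import json

def jsonrpc_request(req_id: int, method: str, params: dict) -> str:
    """Serialize a JSON-RPC 2.0 request, the wire format MCP builds on."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    })

# An MCP-style initialize handshake (client name and version are made up).
msg = jsonrpc_request(1, "initialize", {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1.0"},
})
```

A real client would send this over stdio or HTTP to an MCP server and then negotiate capabilities from the response; SDKs handle this plumbing for you.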
## Local LLM Tools

- DiffusionBee - Desktop application for running generative models locally, with a focus on image generation.
- Docker Model Runner - Tooling for managing, running, and deploying AI models within Docker-based environments.
- Draw Things - Application for running image generation models locally, with support for offline workflows.
- Jan - Local-first AI assistant designed to run models privately on user devices.
- JellyBox - Environment for running AI models locally with full offline support.
- Lemonade - Open-source local AI runtime for deploying and interacting with models on personal hardware.
- Local AI - Self-hosted AI stack for running language models, agents, and related workloads locally.
- llama.cpp - Lightweight inference engine in C/C++ for running large language models on local hardware.
- LM Studio - Desktop interface for discovering, running, and interacting with local language models.
- MLX LM - Python library for inference and fine-tuning of language models on Apple Silicon using MLX.
- Ollama - Tool for running and managing language models locally with a simplified CLI and API.
- SGLang - High-performance framework for serving language and multimodal models.
- Unsloth - Toolkit for running and fine-tuning models locally, with support for offline environments.
- vLLM - Inference and serving engine optimized for throughput and memory efficiency in LLM workloads.
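Several of these tools (Ollama, LM Studio, llama.cpp's server) expose a local HTTP API. A sketch of building a request against Ollama's default `/api/generate` endpoint using only the standard library; the model name is a placeholder, and the request is only constructed here, not sent:

```python
import json
import urllib.request

# Ollama's default local endpoint; other tools expose similar HTTP APIs
# on their own ports.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("llama3.2", "Why is the sky blue?")
# urllib.request.urlopen(req) would return the model's JSON response
# once a local Ollama server is running.
```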
## Observability

- LangSmith Platform - Framework-agnostic platform for monitoring, evaluating, and debugging LLM applications and agents.
## Orchestration Frameworks

- Deep Agents - Open-source agent framework for long-running tasks, with support for planning, context management, and multi-agent coordination.
- Google Agent Development Kit (ADK) - Framework for building AI agents with a model-agnostic and deployment-agnostic design.
- LangChain - Open-source framework providing abstractions, integrations, and tooling for building LLM-powered applications.
- LangGraph - Low-level orchestration framework for building and running stateful, long-lived agent workflows.
- Microsoft Agent Framework - Framework for developing agent-based systems, supporting both simple interactions and multi-agent workflows with graph-based orchestration in .NET and Python.
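These frameworks share a common idea: an agent as a graph of nodes that each read and update a shared state. The toy below is a framework-free sketch of that pattern, not any library's actual API:

```python
from typing import Callable

State = dict
Node = Callable[[State], State]

class MiniGraph:
    """Toy orchestration graph: run nodes in edge order, threading state."""

    def __init__(self) -> None:
        self.nodes: dict[str, Node] = {}
        self.edges: dict[str, str] = {}
        self.entry: str | None = None

    def add_node(self, name: str, fn: Node) -> None:
        if self.entry is None:
            self.entry = name  # first node added becomes the entry point
        self.nodes[name] = fn

    def add_edge(self, src: str, dst: str) -> None:
        self.edges[src] = dst

    def invoke(self, state: State) -> State:
        current = self.entry
        while current is not None:
            state = self.nodes[current](state)
            current = self.edges.get(current)  # stop when no outgoing edge
        return state

# A two-step "plan then act" pipeline with stand-in node logic.
graph = MiniGraph()
graph.add_node("plan", lambda s: {**s, "plan": f"answer: {s['question']}"})
graph.add_node("act", lambda s: {**s, "result": s["plan"].upper()})
graph.add_edge("plan", "act")

final = graph.invoke({"question": "what is 2+2?"})
```

Real frameworks add conditional edges, cycles, persistence, and streaming on top of this basic state-passing loop.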
## Sandboxes

- Amazon Bedrock AgentCore - Managed environment for deploying and running AI agents with support for multiple models and frameworks.
- Daytona - Infrastructure for executing AI-generated code in isolated and reproducible environments.
- Modal Sandboxes - Serverless container-based environments for running AI-generated code with support for dynamic configuration and GPU workloads.
- Runloop - Ephemeral development environments for executing code in isolation, with support for agent-based workflows and evaluation pipelines.
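Sandboxes exist because agent-generated code should not run in the agent's own process. The sketch below approximates the idea locally with a subprocess and a timeout; real sandboxes add container or microVM isolation, resource limits, and network controls on top:

```python
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Run a code snippet in a separate interpreter process with a timeout.

    A minimal stand-in for a real sandbox, illustrative only: process
    isolation alone does not protect the filesystem or network.
    """
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout.strip()

output = run_untrusted("print(sum(range(10)))")
```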
## Learning Resources

- 5-Day AI Agents Intensive Course with Google - Learning guide so anyone can explore the foundations, architecture, and practical development of AI agents.
- 5-Day Gen AI Intensive Course with Google - Learning guide for exploring the fundamental technologies and techniques behind Generative AI.
- AI Agents Course - This free course will take you on a journey, from beginner to expert, in understanding, using and building AI agents.
- MCP Course - This free course, built in partnership with Anthropic, will take you on a journey, from beginner to informed, in understanding, using, and building applications with MCP.
- Ambient Agents with LangGraph - Build your own ambient agent to manage your email. You’ll learn the fundamentals of LangGraph as you build an email assistant from scratch, and use LangSmith to evaluate its performance.
- Building Reliable Agents - Take an agent from first run to production-ready system through iterative cycles of improvement with LangSmith, the agent engineering platform for observing and evaluating agents.
- Deep Agents - Learn the fundamental characteristics of Deep Agents and how to implement your own Deep Agent for complex, long-running tasks.
- Deep Research with LangGraph - Build your own deep research agent to handle research tasks. Learn how to use LangGraph to build a multi-agent system, then use LangSmith to evaluate its performance.
- Introduction to Agent Observability & Evaluations - Learn the essentials of agent observability & evaluations with LangSmith. Continuously improve your agents with LangSmith's tools for observability, evaluation, and prompt engineering.
- Introduction to LangChain - Python - Learn how to build AI agents with LangChain. Get started quickly using pre-built architectures and model integrations, then debug your agents with LangSmith Observability.
- Introduction to LangGraph - Python - Learn the basics of LangGraph, the framework that helps developers add better precision and control to agentic workflows.
- Quickstart courses - Collection of quickstart courses about LangChain, LangGraph and LangSmith.
- AI Agents for Beginners - A course teaching everything you need to know to start building AI Agents.
- MCP for Beginners - Learn MCP with Hands-on Code Examples in C#, Java, JavaScript, Rust, Python, and TypeScript.
- Python + Agentes: Creando agentes y flujos de IA - [Spanish version] A series that explores the foundational concepts behind building AI agents in Python using the Microsoft Agent Framework.
- Python + Agents: Building AI agents and workflows - [English version] A series that explores the foundational concepts behind building AI agents in Python using the Microsoft Agent Framework.
Your contributions and suggestions are heartily welcome. Please check the Contributing Guidelines for more details.