A pure Clojure wrapper for LangChain4j - idiomatic, unopinionated, and data-driven.
LangChain4Clj is a pure translation layer - we wrap LangChain4j's functionality in idiomatic Clojure without adding opinions, prompts, or behaviors. You get the full power of LangChain4j with Clojure's simplicity.
- Multiple LLM Providers - OpenAI, Anthropic, Google AI Gemini, Vertex AI Gemini, Mistral, Ollama
- Model Presets - Pre-configured models with sensible defaults for quick setup
- Environment Variable Resolution - Secure API key management with `[:env "VAR"]` pattern
- JSON Schema Converter - Define tool parameters with standard JSON Schema syntax
- Image Generation - DALL-E 3 & DALL-E 2 support with HD quality and style control
- Provider Failover - Automatic retry and fallback for high availability
- Streaming Responses - Real-time token streaming for better UX
- Tool/Function Calling - Unified support for Spec, Schema, and Malli
- Assistant System - Memory management, tool execution loops, templates
- Structured Output - Automatic parsing with retry logic
- Multi-Agent Orchestration - Sequential, parallel, and collaborative agents
- Chat Listeners - Observability, token tracking, and event handling
- Thinking/Reasoning Modes - Extended thinking for OpenAI o1/o3, Anthropic, Gemini
- Message Serialization - Convert messages to EDN/JSON for persistence
- Provider-Specific Options - Full access to Anthropic cache, OpenAI organization, Gemini code execution
- 100% Data-Driven - Configure everything with Clojure maps
- Idiomatic API - Threading-first, composable, pure Clojure
Add to your deps.edn:
{:deps {io.github.nandoolle/langchain4clj {:mvn/version "1.6.1"}}}

With schema libraries (optional):
;; For Plumatic Schema support
{:deps {...}
:aliases {:with-schema {:extra-deps {prismatic/schema {:mvn/version "1.4.1"}}}}}
;; For Malli support
{:deps {...}
:aliases {:with-malli {:extra-deps {metosin/malli {:mvn/version "0.16.4"}}}}}

(require '[langchain4clj.core :as llm])
;; Create a model
(def model (llm/create-model {:provider :openai
:api-key (System/getenv "OPENAI_API_KEY")
:model "gpt-4"}))
;; Simple chat
(llm/chat model "Explain quantum computing in one sentence")
;; => "Quantum computing harnesses quantum mechanical phenomena..."
;; Chat with options
(llm/chat model "Explain quantum computing"
{:temperature 0.3 ; Control randomness
:max-tokens 100 ; Limit response length
:system-message "You are a physics teacher"})
;; => Returns ChatResponse with metadata

Use pre-configured models with sensible defaults:
(require '[langchain4clj.presets :as presets])
(require '[langchain4clj.core :as llm])
;; Get a preset and create model
(def model
(llm/create-model
(presets/get-preset :openai/gpt-4o
{:api-key [:env "OPENAI_API_KEY"]})))
;; Available presets
(presets/available-presets)
;; => (:openai/gpt-4o :openai/gpt-4o-mini :openai/o3-mini
;; :anthropic/claude-sonnet-4 :anthropic/claude-opus-4
;; :google/gemini-2-5-flash :google/gemini-2-5-pro ...)
;; Reasoning models with thinking enabled
(def reasoning-model
(llm/create-model
(presets/get-preset :anthropic/claude-sonnet-4-reasoning
{:api-key [:env "ANTHROPIC_API_KEY"]})))Secure API key management with [:env "VAR"] pattern:
;; Instead of hardcoding API keys...
(llm/create-model {:provider :openai
:api-key [:env "OPENAI_API_KEY"] ; Resolved at runtime
:base-url [:env "OPENAI_BASE_URL"] ; Optional
:model "gpt-4o"})
;; Works with nested configs
(llm/create-model {:provider :anthropic
:api-key [:env "ANTHROPIC_API_KEY"]
:anthropic {:cache-system-messages true}})
;; For testing, override env vars
(binding [llm/*env-overrides* {"OPENAI_API_KEY" "test-key"}]
(llm/create-model {:provider :openai
:api-key [:env "OPENAI_API_KEY"]}));; OpenAI
(def openai-model
(llm/create-model {:provider :openai
:api-key (System/getenv "OPENAI_API_KEY")
:model "gpt-4o-mini"}))
;; Anthropic Claude
(def claude-model
(llm/create-model {:provider :anthropic
:api-key (System/getenv "ANTHROPIC_API_KEY")
:model "claude-3-5-sonnet-20241022"}))
;; Google AI Gemini (Direct API)
(def gemini-model
(llm/create-model {:provider :google-ai-gemini
:api-key (System/getenv "GEMINI_API_KEY")
:model "gemini-1.5-flash"}))
;; Vertex AI Gemini (Google Cloud)
(def vertex-gemini-model
(llm/create-model {:provider :vertex-ai-gemini
:project "your-gcp-project-id"
:location "us-central1"
:model "gemini-1.5-pro"}))
;; Mistral AI
(def mistral-model
(llm/create-model {:provider :mistral
:api-key (System/getenv "MISTRAL_API_KEY")
:model "mistral-medium-2508"}))
;; Ollama (Local models - no API key needed!)
(def ollama-model
(llm/create-model {:provider :ollama
:model "llama3.1"})) ; Requires Ollama running locally
;; Helper functions (alternative API)
(def gemini (llm/google-ai-gemini-model {:api-key "..."}))
(def vertex (llm/vertex-ai-gemini-model {:project "..."}))
(def mistral (llm/mistral-model {:api-key "..."}))
(def ollama (llm/ollama-model {:model "mistral"}))
;; All models work the same way
(llm/chat openai-model "Hello!")
(llm/chat claude-model "Hello!")
(llm/chat gemini-model "Hello!")
(llm/chat vertex-gemini-model "Hello!")
(llm/chat mistral-model "Hello!")
(llm/chat ollama-model "Hello!")

The chat function supports all LangChain4j ChatRequest parameters:
(llm/chat model "Your prompt"
{:tools [tool-spec] ; Function calling
:response-format ResponseFormat/JSON ; JSON mode
:system-message "System prompt"
:temperature 0.7
:max-tokens 1000
:top-p 0.9
:top-k 40
:frequency-penalty 0.0
:presence-penalty 0.0
:stop-sequences ["STOP"]
:model-name "gpt-4"}) ; Override modelForce the LLM to return valid JSON (supported by OpenAI, Anthropic):
(import '(dev.langchain4j.model.chat.request ResponseFormat))
;; Option 1: Direct in chat options
(llm/chat model "Return user data as JSON"
{:response-format ResponseFormat/JSON})
;; => ChatResponse with guaranteed valid JSON in .text
;; Option 2: Using helper function
(llm/chat model "Return user data as JSON"
(llm/with-json-mode {:temperature 0.7}))
;; Option 3: Threading-first style
(-> {:temperature 0.7
:max-tokens 500}
llm/with-json-mode
(as-> opts (llm/chat model "Return user data" opts)))
;; Parse the JSON response
(require '[clojure.data.json :as json])
(let [response (llm/chat model "Return user data"
{:response-format ResponseFormat/JSON})
json-str (-> response .aiMessage .text)]
(json/read-str json-str :key-fn keyword))
;; => {:name "John" :age 30 :email "john@example.com"}Why use native JSON mode?
- 100% reliable - Provider guarantees valid JSON
- No parsing errors - No need for retry logic
- Faster - No post-processing validation needed
- Simple - Just parse and use
Tip: For complex structured output with validation, see the structured namespace which builds on JSON mode.
Receive tokens in real-time as they're generated for better UX:
(require '[langchain4clj.streaming :as streaming])
;; Create streaming model
(def streaming-model
(streaming/create-streaming-model
{:provider :openai
:api-key (System/getenv "OPENAI_API_KEY")
:model "gpt-4o-mini"}))
;; Stream with callbacks
(streaming/stream-chat streaming-model "Explain AI in simple terms"
{:on-token (fn [token]
(print token)
(flush))
:on-complete (fn [response]
(println "\nDone!")
(println "Tokens:" (-> response .tokenUsage .totalTokenCount)))
:on-error (fn [error]
(println "Error:" (.getMessage error)))})
;; Accumulate while streaming
(let [accumulated (atom "")
result (promise)]
(streaming/stream-chat streaming-model "Count to 5"
{:on-token (fn [token]
(print token)
(flush)
(swap! accumulated str token))
:on-complete (fn [resp]
(deliver result {:text @accumulated
:response resp}))})
@result)
;; => {:text "1, 2, 3, 4, 5" :response #<Response...>}

Why use streaming?
- Better UX - Users see progress immediately
- Feels faster - Perceived latency is lower
- Cancellable - Can stop mid-generation
- Real-time feedback - Process tokens as they arrive
Works with all providers: OpenAI, Anthropic, Google AI Gemini, Vertex AI Gemini, Mistral, Ollama
Tip: See examples/streaming_demo.clj for interactive CLI examples and user-side core.async integration patterns.
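The user-side core.async integration mentioned above can be as small as bridging the callbacks onto a channel. A minimal sketch, assuming only the stream-chat callback API shown earlier (chat->channel is a name introduced here for illustration):
(require '[clojure.core.async :as async])
;; Bridge stream-chat callbacks onto a core.async channel
(defn chat->channel
  "Returns a channel that receives each token and closes when the stream ends.
   Errors are placed on the channel as {:error e} before closing."
  [model prompt]
  (let [ch (async/chan 64)]
    (streaming/stream-chat model prompt
      {:on-token    (fn [token] (async/put! ch token))
       :on-complete (fn [_response] (async/close! ch))
       :on-error    (fn [error]
                      (async/put! ch {:error error})
                      (async/close! ch))})
    ch))
;; Consume tokens as they arrive
(let [ch (chat->channel streaming-model "Count to 5")]
  (async/go-loop []
    (when-let [token (async/<! ch)]
      (print token)
      (flush)
      (recur))))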
Generate images using DALL-E 3 and DALL-E 2:
(require '[langchain4clj.image :as image])
;; Create an image model
(def model (image/create-image-model
{:provider :openai
:api-key (System/getenv "OPENAI_API_KEY")
:model "dall-e-3"
:quality "hd"
:size "1024x1024"}))
;; Or use convenience function
(def model (image/openai-image-model
{:api-key (System/getenv "OPENAI_API_KEY")
:quality "hd"}))
;; Generate image
(def result (image/generate model "A sunset over mountains"))
;; Access results
(:url result) ;; => "https://oaidalleapiprodscus..."
(:revised-prompt result) ;; => "A picturesque view of a vibrant sunset..."
(:base64 result) ;; => nil (or base64 data if requested)

Features:
- DALL-E 3 - HD quality, multiple sizes (1024x1024, 1792x1024, 1024x1792)
- DALL-E 2 - Faster and cheaper alternative (512x512, 256x256, 1024x1024)
- Style control - "vivid" (hyper-real) or "natural" (subtle)
- Quality options - "standard" or "hd" for DALL-E 3
- Revised prompts - DALL-E 3 returns enhanced/safety-filtered prompts
- Base64 support - Optional base64 encoding
Examples:
;; HD quality landscape
(def hd-model (image/openai-image-model
{:api-key "sk-..."
:quality "hd"
:size "1792x1024"}))
;; Style variations
(def vivid-model (image/openai-image-model
{:api-key "sk-..."
:style "vivid"})) ; More dramatic
(def natural-model (image/openai-image-model
{:api-key "sk-..."
:style "natural"})) ; More subtle
;; DALL-E 2 (faster, cheaper)
(def dalle2 (image/create-image-model
{:provider :openai
:api-key "sk-..."
:model "dall-e-2"
:size "512x512"}))Tip: See examples/image_generation_demo.clj for comprehensive examples including HD quality, batch generation, and error handling.
Monitor LLM interactions with observability hooks:
(require '[langchain4clj.listeners :as listeners])
;; Track token usage
(def stats (atom {}))
(def tracker (listeners/token-tracking-listener stats))
;; Create model with listener
(def model
(llm/create-model
{:provider :openai
:api-key (System/getenv "OPENAI_API_KEY")
:listeners [tracker]}))
(llm/chat model "Hello!")
@stats
;; => {:input-tokens 10 :output-tokens 15 :total-tokens 25 :request-count 1 ...}
;; Compose multiple listeners
(def combined
(listeners/compose-listeners
(listeners/logging-listener)
(listeners/token-tracking-listener stats)))

Pre-built listeners:
- `logging-listener` - Log requests/responses
- `token-tracking-listener` - Accumulate token statistics
- `message-capturing-listener` - Record conversation history
- `create-listener` - Custom handlers for on-request, on-response, on-error
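A custom listener built with `create-listener` might look like the following sketch; the :on-request/:on-response/:on-error option keys are an assumption based on the handler names listed above:
;; Sketch only - option keys assumed from the handler names documented above
(def audit-listener
  (listeners/create-listener
   {:on-request  (fn [ctx] (println "Request sent:" ctx))
    :on-response (fn [ctx] (println "Response received:" ctx))
    :on-error    (fn [ctx] (println "Request failed:" ctx))}))
(def audited-model
  (llm/create-model
   {:provider :openai
    :api-key (System/getenv "OPENAI_API_KEY")
    :listeners [audit-listener]}))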
See docs/LISTENERS.md for complete documentation.
Extended thinking for complex reasoning tasks:
;; OpenAI o3-mini with reasoning
(def model
(-> {:provider :openai
:api-key (System/getenv "OPENAI_API_KEY")
:model "o3-mini"}
(llm/with-thinking {:effort :high :return true})
llm/create-model))
;; Anthropic Claude with extended thinking
(def model
(-> {:provider :anthropic
:api-key (System/getenv "ANTHROPIC_API_KEY")
:model "claude-sonnet-4-20250514"}
(llm/with-thinking {:enabled true :budget-tokens 4096})
llm/create-model))
;; Google Gemini with reasoning
(def model
(-> {:provider :google-ai-gemini
:api-key (System/getenv "GEMINI_API_KEY")
:model "gemini-2.5-flash"}
(llm/with-thinking {:enabled true :effort :medium})
llm/create-model))

Options: `:enabled`, `:effort` (`:low`/`:medium`/`:high`), `:budget-tokens`, `:return`, `:send`
Convert messages between Java, EDN, and JSON for persistence:
(require '[langchain4clj.messages :as msg])
;; Java -> EDN
(msg/messages->edn (.messages memory))
;; => [{:type :user :contents [...]} {:type :ai :text "..."}]
;; EDN -> Java
(msg/edn->messages [{:type :user :text "Hi"}
{:type :ai :text "Hello!"}])
;; JSON (LangChain4j format)
(msg/messages->json messages) ;; Java -> JSON
(msg/json->messages json-str) ;; JSON -> Java
;; Persist to file
(require '[clojure.edn :as edn])
(spit "chat.edn" (pr-str (msg/messages->edn messages)))
(msg/edn->messages (edn/read-string (slurp "chat.edn")))

See docs/MESSAGES.md for complete documentation.
LangChain4Clj offers two APIs for creating tools:
The simplest way to create tools with inline schema validation:
(require '[langchain4clj.tools :as tools])
;; Define a tool with defn-like syntax
(tools/deftool get-pokemon
"Fetches Pokemon information by name"
{:pokemon-name string?} ; Inline schema
[{:keys [pokemon-name]}]
(fetch-pokemon pokemon-name))
;; Multiple parameters
(tools/deftool compare-numbers
"Compares two numbers"
{:x number? :y number?}
[{:keys [x y]}]
(str x " is " (if (> x y) "greater" "less") " than " y))
;; Use in assistants - just pass the tool!
(def assistant
(assistant/create-assistant
{:model model
:tools [get-pokemon compare-numbers]}))

Why deftool?
- Concise - 5 lines vs 15 lines with alternative approaches
- Safe - Schema is mandatory, impossible to forget
- Idiomatic - Looks like `defn`
- Simple - Inline types with predicates (`string?`, `int?`, `boolean?`)
- Automatic - Kebab-case to camelCase normalization built-in
For dynamic tool creation or complex schemas:
;; Using Clojure Spec (advanced schemas)
(def add-numbers
(tools/create-tool
{:name "add_numbers"
:description "Adds two numbers together"
:params-schema ::add-params ; Spec keyword
:fn (fn [{:keys [a b]}] (+ a b))}))
;; Using Plumatic Schema
(def weather-tool
(tools/create-tool
{:name "weather"
:description "Gets weather"
:params-schema {:location s/Str} ; Schema map
:fn get-weather}))
;; Using Malli
(def database-tool
(tools/create-tool
{:name "query"
:description "Query database"
:params-schema [:map [:query :string]] ; Malli vector
:fn query-db}))

When to use create-tool:
- Dynamic tool generation at runtime
- Complex validation with custom predicates
- Integration with existing spec/schema/malli definitions
- Programmatic tool configuration
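For example, runtime generation can map over a plain data registry using only the create-tool API shown above (the registry contents here are illustrative):
;; Sketch: generate one tool per entry in a data-driven registry
(def endpoints
  {"get_user"  {:description "Fetch a user by id"
                :schema [:map [:id :string]] ; Malli
                :handler (fn [{:keys [id]}] (str "user-" id))}
   "get_order" {:description "Fetch an order by id"
                :schema [:map [:id :string]]
                :handler (fn [{:keys [id]}] (str "order-" id))}})
(def generated-tools
  (mapv (fn [[tool-name {:keys [description schema handler]}]]
          (tools/create-tool
           {:name tool-name
            :description description
            :params-schema schema
            :fn handler}))
        endpoints))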
For integration with external systems that use JSON Schema (MCP servers, OpenAPI, etc.):
(require '[langchain4clj.tools.helpers :as helpers])
;; Define tools with JSON Schema
(def my-tools
(helpers/tools->map
[{:name "get_weather"
:description "Get weather for a location"
:parameters {:type :object
:properties {:location {:type :string}
:units {:enum ["celsius" "fahrenheit"]}}
:required [:location]}
:fn (fn [{:keys [location units]}]
(get-weather location (or units "celsius")))}
{:name "add_numbers"
:description "Add two numbers"
:parameters {:type :object
:properties {:a {:type :number}
:b {:type :number}}
:required [:a :b]}
:fn (fn [{:keys [a b]}] (+ a b))}]))
;; Use with AiServices
(-> (AiServices/builder MyInterface)
(.chatModel model)
(.tools my-tools)
(.build))

See docs/TOOLS_HELPERS.md for complete documentation.
LangChain4Clj automatically handles the naming mismatch between Clojure's kebab-case convention and OpenAI's camelCase parameters:
;; Define your tool using idiomatic kebab-case
(require '[clojure.spec.alpha :as s])
(s/def ::pokemon-name string?)
(s/def ::pokemon-params (s/keys :req-un [::pokemon-name]))
(def get-pokemon-tool
(tools/create-tool
{:name "get_pokemon"
:description "Fetches Pokemon information by name"
:params-schema ::pokemon-params
:fn (fn [{:keys [pokemon-name]}] ; Use kebab-case naturally!
(fetch-pokemon pokemon-name))}))
;; Both calling styles work automatically:
(tools/execute-tool get-pokemon-tool {:pokemon-name "pikachu"}) ; Clojure style
(tools/execute-tool get-pokemon-tool {"pokemonName" "pikachu"}) ; OpenAI style
;; When OpenAI calls your tool, it sends {"pokemonName": "pikachu"}
;; LangChain4Clj preserves the original AND adds kebab-case versions
;; Your code sees: {:pokemon-name "pikachu", "pokemonName" "pikachu"}
;; Spec validation works with :pokemon-name
;; Your destructuring works with :pokemon-name

Benefits:
- Write idiomatic Clojure code with kebab-case
- Full compatibility with OpenAI's camelCase parameters
- Spec/Schema/Malli validation works seamlessly
- Zero configuration required
- Handles deep nesting and collections automatically
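The normalization idea itself is roughly the following - a hypothetical sketch for flat maps only, not the library's actual implementation (which also handles deep nesting and collections):
(require '[clojure.string :as str])
;; Hypothetical sketch: keep original camelCase keys, add kebab-case keyword versions
(defn camel->kebab [s]
  (-> s
      (str/replace #"([a-z0-9])([A-Z])" "$1-$2")
      str/lower-case))
(defn add-kebab-keys [m]
  (reduce-kv (fn [acc k v]
               (cond-> (assoc acc k v)
                 (string? k) (assoc (keyword (camel->kebab k)) v)))
             {} m))
(add-kebab-keys {"pokemonName" "pikachu"})
;; => {"pokemonName" "pikachu", :pokemon-name "pikachu"}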
(require '[langchain4clj.assistant :as assistant])
;; Create an assistant with memory and tools
(def my-assistant
(assistant/create-assistant
{:model model
:tools [calculator weather-tool]
:memory (assistant/create-memory {:max-messages 10})
:system-message "You are a helpful assistant"}))
;; Use it - memory and tools are automatic!
(my-assistant "What's 2+2?")
;; => "2 + 2 equals 4"
(my-assistant "What's the weather in Tokyo?")
;; => "The weather in Tokyo is currently..."
;; Memory persists between calls
(my-assistant "My name is Alice")
(my-assistant "What's my name?")
;; => "Your name is Alice"(require '[langchain4clj.structured :as structured])
;; Define a structured type
(structured/defstructured Recipe
{:name :string
:ingredients [:vector :string]
:prep-time :int})
;; Get structured data automatically
(get-recipe model "Create a pasta recipe")
;; => {:name "Spaghetti Carbonara"
;; :ingredients ["spaghetti" "eggs" "bacon" "cheese"]
;; :prep-time 20}

(require '[langchain4clj.agents :as agents])
;; Create specialized agents
(def researcher (agents/create-agent {:model model :role "researcher"}))
(def writer (agents/create-agent {:model model :role "writer"}))
(def editor (agents/create-agent {:model model :role "editor"}))
;; Chain them together
(def blog-pipeline
(agents/chain researcher writer editor))
(blog-pipeline "Write about quantum computing")
;; Each agent processes in sequence

Build production-ready systems with automatic failover between LLM providers:
(require '[langchain4clj.resilience :as resilience])
;; Basic failover with retry
(def resilient-model
(resilience/create-resilient-model
{:primary (llm/create-model {:provider :openai :api-key "..."})
:fallbacks [(llm/create-model {:provider :anthropic :api-key "..."})
(llm/create-model {:provider :ollama})]
:max-retries 2 ; Retry on rate limits/timeouts
:retry-delay-ms 1000})) ; 1 second between retries
;; Add circuit breaker for production
(def production-model
(resilience/create-resilient-model
{:primary (llm/create-model {:provider :openai :api-key "..."})
:fallbacks [(llm/create-model {:provider :anthropic :api-key "..."})
(llm/create-model {:provider :ollama})]
:max-retries 2
:retry-delay-ms 1000
:circuit-breaker? true ; Enable circuit breaker
:failure-threshold 5 ; Open after 5 failures
:success-threshold 2 ; Close after 2 successes
:timeout-ms 60000})) ; Try half-open after 60s
;; Use like any other model - automatic failover on errors!
(llm/chat production-model "Explain quantum computing")
;; Tries: OpenAI (with retries + CB) -> Anthropic (with retries + CB) -> Ollama
;; Works with all features: tools, streaming, JSON mode, etc.
(llm/chat production-model "Calculate 2+2" {:tools [calculator]})

Error Handling:
- Retryable errors (429, 503, timeout) - Retry on same provider
- Recoverable errors (401, 404, connection) - Try next provider
- Non-recoverable errors (400, quota) - Throw immediately
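That classification amounts to a simple dispatch on the failure kind - an illustrative sketch, not the library's internals:
;; Illustrative error classification (not the library's internals)
(defn classify-error [{:keys [status timeout? connection-failed?]}]
  (cond
    (or timeout? (contains? #{429 503} status))           :retryable      ; retry same provider
    (or connection-failed? (contains? #{401 404} status)) :recoverable    ; try next provider
    :else                                                 :non-recoverable)) ; throw immediately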
Circuit Breaker States:
- Closed - Normal operation, all requests pass through
- Open - Too many failures, provider temporarily skipped
- Half-Open - Testing recovery after timeout
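The three states form a small state machine; here is an illustrative sketch in plain Clojure, again not the library's implementation:
;; Illustrative circuit-breaker state machine
(defn make-breaker []
  (atom {:state :closed :failures 0 :successes 0 :opened-at 0}))
(defn allow-request? [breaker {:keys [timeout-ms]}]
  (let [{:keys [state opened-at]} @breaker]
    (case state
      :closed true      ; normal operation
      :half-open true   ; probe request allowed
      :open (if (>= (- (System/currentTimeMillis) opened-at) timeout-ms)
              (do (swap! breaker assoc :state :half-open :successes 0) true)
              false)))) ; provider skipped while open
(defn on-success! [breaker {:keys [success-threshold]}]
  (swap! breaker
         (fn [{:keys [state successes] :as b}]
           (if (and (= state :half-open)
                    (>= (inc successes) success-threshold))
             {:state :closed :failures 0 :successes 0 :opened-at 0}
             (update b :successes inc)))))
(defn on-failure! [breaker {:keys [failure-threshold]}]
  (swap! breaker
         (fn [{:keys [state failures] :as b}]
           (if (or (= state :half-open)
                   (>= (inc failures) failure-threshold))
             (assoc b :state :open :failures 0
                    :opened-at (System/currentTimeMillis))
             (update b :failures inc)))))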
Why use failover?
- High availability - Never down due to single provider issues
- Cost optimization - Use cheaper fallbacks when primary fails
- Zero vendor lock-in - Switch providers seamlessly
- Production-ready - Handle rate limits and outages gracefully
- Circuit breaker - Prevent cascading failures in production
Complete Guide: See docs/RESILIENCE.md for comprehensive documentation including:
- Detailed configuration reference
- Production examples
- Monitoring & troubleshooting
- Best practices
- Advanced topics (streaming, tools, JSON mode)
;; Simple inline validation with deftool (RECOMMENDED)
(tools/deftool add-numbers
"Adds two numbers together with optional precision formatting"
{:a number?
:b number?
:precision int?} ; Optional params supported!
[{:keys [a b precision] :or {precision 2}}]
(format (str "%." precision "f") (double (+ a b))))
;; For complex validation, use Spec with create-tool
(require '[clojure.spec.alpha :as s])
(s/def ::a number?)
(s/def ::b number?)
(s/def ::precision (s/and int? #(>= % 0) #(<= % 10))) ; 0-10 decimal places
(s/def ::calc-params (s/keys :req-un [::a ::b]
:opt-un [::precision]))
(def advanced-calc
(tools/create-tool
{:name "add_numbers"
:description "Adds two numbers with optional precision"
:params-schema ::calc-params
:fn (fn [{:keys [a b precision] :or {precision 2}}]
(format (str "%." precision "f") (double (+ a b))))}))- Full API Documentation
- Core Chat Guide
- Model Presets
- Environment Resolution
- JSON Schema Converter
- Assistant Tutorial
- Tool System Guide
- Chat Listeners
- Message Serialization
- Provider Failover & Resilience
- Examples
- RAG with document loaders and vector stores
- Token counting and cost estimation
- PgVector integration for production RAG
We welcome contributions! Check out:
- LangChain4j - The fantastic Java library we're wrapping
- Clojure community for feedback and ideas
Copyright © 2025 Fernando Olle
Distributed under the Eclipse Public License version 2.0.
