# LLM4s

Agentic and LLM Programming in Scala

## Overview

LLM4s is an open-source initiative under the LLM4 organization focused on building, documenting, and sharing practical resources for working with Large Language Models (LLMs). The project aims to make LLM concepts accessible to students, researchers, and developers through clean documentation, real-world examples, and beginner-to-advanced workflows.

Whether you are new to LLMs or exploring advanced topics like RAG and evaluation, LLM4s is designed to help you learn by doing.


## Goals

  • Provide clear, beginner-friendly explanations of LLM concepts
  • Share practical examples for real-world LLM usage
  • Encourage open-source collaboration in the LLM ecosystem
  • Support students and contributors preparing for research programs and GSoC

## Features

  • 📘 LLM fundamentals and prompt engineering guides
  • 🧪 Evaluation techniques for LLM outputs
  • 🧠 Examples using OpenAI-compatible APIs
  • 📂 Modular structure for easy contributions
  • 🌱 Beginner-friendly issues and documentation
  • Multi-Provider Support: Connect seamlessly to multiple LLM providers (OpenAI, Anthropic, Google Gemini, Azure, Ollama, DeepSeek).
  • Execution Environments: Run LLM-driven operations in secure, containerized or non-containerized setups.
  • Error Handling: Robust mechanisms to catch, log, and recover from failures gracefully.
  • MCP Support: Integration with Model Context Protocol for richer context management.
  • Agent Framework: Build single or multi-agent workflows with standardized interfaces.
  • Multimodal Generation: Support for text, image, voice, and other LLM modalities.
  • RAG (Retrieval-Augmented Generation): Built-in tools for search, embedding, retrieval workflows, and RAGAS evaluation with benchmarking harness.
  • Observability: Detailed trace logging, monitoring, and analytics for debugging and performance insights.
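The error-handling bullet above corresponds to an Either-based `Result` style (visible later in this README, where `createFromEnv()` returns a `Result` that is `fold`ed). A self-contained sketch of that pattern, with error names that are purely illustrative and not the library's actual hierarchy:

```scala
// Sketch of Either-based error handling in the style llm4s uses
// (a Result[A] alias over Either with a typed error ADT).
// ConfigError/ApiError are illustrative stand-ins.
sealed trait LlmError { def message: String }
final case class ConfigError(message: String) extends LlmError
final case class ApiError(message: String)    extends LlmError

object ResultDemo {
  type Result[A] = Either[LlmError, A]

  def readModel(env: Map[String, String]): Result[String] =
    env.get("LLM_MODEL").toRight(ConfigError("LLM_MODEL is not set"))

  def validate(model: String): Result[String] =
    if (model.contains("/")) Right(model)
    else Left(ConfigError(s"expected provider/model, got: $model"))

  // Errors short-circuit through the for-comprehension instead of throwing.
  def load(env: Map[String, String]): Result[String] =
    for {
      m <- readModel(env)
      v <- validate(m)
    } yield v
}
```

The payoff is that every failure path is a typed value the compiler forces callers to handle, rather than an exception discovered at runtime.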

## Architecture

```
        ┌───────────────────────────┐
        │    LLM4S API Layer        │
        └──────────┬────────────────┘
                   │
          Multi-Provider Connector
        (OpenAI | Anthropic | DeepSeek | ...)
                   │
         ┌─────────┴─────────┐
         │ Execution Manager │
         └─────────┬─────────┘
                   │
        ┌──────────┴──────────┐
        │   Agent Framework   │
        └──────────┬──────────┘
                   │
      ┌────────────┴────────────┐
      │  RAG Engine + Tooling   │
      └────────────┬────────────┘
                   │
     ┌─────────────┴─────────────┐
     │   Observability Layer     │
     └───────────────────────────┘
```
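The RAG Engine layer above is built around embedding-based retrieval. As a minimal, dependency-free sketch of the core ranking step (toy vectors stand in for the real embeddings that llm4s obtains from its embedding providers):

```scala
// Minimal sketch of the similarity ranking at the heart of a RAG
// retrieval step: rank documents by cosine similarity of embeddings.
object RagRankDemo {
  def cosine(a: Vector[Double], b: Vector[Double]): Double = {
    val dot   = a.zip(b).map { case (x, y) => x * y }.sum
    val normA = math.sqrt(a.map(x => x * x).sum)
    val normB = math.sqrt(b.map(x => x * x).sum)
    if (normA == 0 || normB == 0) 0.0 else dot / (normA * normB)
  }

  // Return the ids of the k documents closest to the query embedding.
  def topK(query: Vector[Double], docs: Map[String, Vector[Double]], k: Int): List[String] =
    docs.toList.sortBy { case (_, emb) => -cosine(query, emb) }.take(k).map(_._1)
}
```

A production pipeline adds chunking, vector storage, and evaluation (the RAGAS harness mentioned above), but the ranking primitive is this small.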


---

## Project Structure

```
LLM4s/
├── docs/                    # Conceptual documentation
│   ├── prompt_engineering.md
│   └── llm_evaluation.md
├── examples/                # Practical examples
│   └── simple_llm_demo.py
├── README.md
└── CONTRIBUTING.md
```


---

## Getting Started

### Prerequisites

* Python 3.8+
* Basic knowledge of Python and Git

### Installation

```bash
git clone https://github.com/LLM4/LLM4s.git
cd LLM4s
pip install -r requirements.txt
```
### Example Usage

```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain LLMs simply."}]
)

print(response.choices[0].message.content)
```

## How to Contribute

We welcome contributions of all sizes! 🚀

  1. Fork the repository
  2. Create a new branch (issue-<number>-feature-name)
  3. Make your changes
  4. Commit with a clear message
  5. Open a Pull Request linked to an issue

Check out CONTRIBUTING.md for detailed guidelines.


## Using DeepSeek

```bash
LLM_MODEL=deepseek/deepseek-chat
DEEPSEEK_API_KEY=<your_deepseek_api_key>
# Optional: DEEPSEEK_BASE_URL defaults to https://api.deepseek.com
```


## Good First Issues

Look for issues labeled:

* `good first issue`
* `documentation`
* `LLM`

These are perfect for new contributors.

---

## Roadmap

* [ ] Add Retrieval-Augmented Generation (RAG) examples
* [ ] Add LLM evaluation benchmarks
* [ ] Add fine-tuning walkthroughs
* [ ] Add agent-based LLM examples

---

## Community & Support

* Open an issue for bugs or feature requests
* Be respectful and collaborative
* Follow open-source best practices

---

## License

This project is licensed under the **MIT License**.

---

## Acknowledgements

Thanks to the open-source community and contributors who make LLM learning accessible for everyone ❤️

Our goal is to implement Scala equivalents of popular Python LLM frameworks, with **multi-provider, multimodal, and observability-first design** as core principles.

### 📋 Detailed Roadmap

**For the full roadmap including core framework features and agent phases, see the [LLM4S Roadmap](https://llm4s.org/reference/roadmap)**

The roadmap covers:
- **Core Framework Features**: Multi-provider LLM, image generation, speech, embeddings, tools, MCP
- **Agent Framework Phases**: Conversations, guardrails, handoffs, memory, streaming, built-in tools
- **Production Pillars**: Testing, API Stability, Performance, Security, Documentation, Observability
- **Path to v1.0.0**: Structured path to production release

### High-Level Goals

- [x] Single API access to multiple LLM providers (like LiteLLM) - **llmconnect** ✅ *Complete*
- [ ] Comprehensive toolchain for building LLM apps (LangChain/LangGraph equivalent)
  - [x] Tool calling ✅ *Complete*
  - [x] RAG search & retrieval ✅ *Complete* (vector memory, embeddings, document Q&A)
  - [x] RAG evaluation & benchmarking ✅ *Complete* (RAGAS metrics, systematic comparison)
  - [x] Logging, tracking, and monitoring ✅ *Complete*
- [ ] Agentic framework (like PydanticAI, CrewAI)
  - [x] Single-agent workflows ✅ *Complete*
  - [x] Multi-agent handoffs ✅ *Complete*
  - [x] Memory system (in-memory, SQLite, vector) ✅ *Complete*
  - [x] Streaming events ✅ *Complete*
  - [x] Built-in tools module ✅ *Complete*
  - [ ] DAG-based orchestration 🚧 *In Progress*
- [x] Tokenization utilities (Scala port of tiktoken) ✅ *Complete*
- [x] Examples for all supported modalities and workflows ✅ *Complete*
- [ ] Stable platform with extensive test coverage 🚧 *In Progress*
- [ ] Scala Coding SWE Agent - perform SWE Bench–type tasks on Scala codebases
  - [ ] Code maps, code generation, and library templates

## Tool Calling

Tool calling is a critical integration, designed to work seamlessly with **multi-provider support** and **agent frameworks**.
We use ScalaMeta to auto-generate tool definitions, support dynamic mapping, and run tools in **secure execution environments**.

Tools can run:

- In **containerized sandboxes** for isolation and safety.
- In **multi-modal pipelines** where LLMs interact with text, images, and voice.
- With **observability hooks** for trace analysis.

### Tool Signature Generation

Using ScalaMeta to automatically generate tool definitions from Scala methods:

```scala
/** My tool does some funky things with a & b...
 * @param a The first thing
 * @param b The second thing
 */
def myTool(a: Int, b: String): ToolResponse = {
  ??? // Implementation
}
```

ScalaMeta extracts method parameters, types, and documentation to generate OpenAI-compatible tool definitions.
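For illustration, here is the kind of OpenAI-style function definition such extraction could produce for `myTool`, hand-built with plain strings. The `Param` case class stands in for metadata that ScalaMeta extraction would supply, and the exact JSON shape is an assumption based on the OpenAI function-calling schema, not llm4s's generated output:

```scala
// Hand-rolled sketch: build an OpenAI-style function/tool definition
// from already-extracted parameter metadata.
object ToolSchemaDemo {
  final case class Param(name: String, scalaType: String, doc: String)

  // Map a few Scala types onto JSON Schema primitive types.
  def jsonType(scalaType: String): String = scalaType match {
    case "Int" | "Long"     => "integer"
    case "Double" | "Float" => "number"
    case "Boolean"          => "boolean"
    case _                  => "string"
  }

  def toolDefinition(name: String, description: String, params: List[Param]): String = {
    val props = params
      .map(p => s""""${p.name}": {"type": "${jsonType(p.scalaType)}", "description": "${p.doc}"}""")
      .mkString(", ")
    val required = params.map(p => "\"" + p.name + "\"").mkString(", ")
    s"""{"type": "function", "function": {"name": "$name", "description": "$description", """ +
      s""""parameters": {"type": "object", "properties": {$props}, "required": [$required]}}}"""
  }
}
```

For `myTool`, the call would be `toolDefinition("myTool", "My tool does some funky things with a & b...", List(Param("a", "Int", "The first thing"), Param("b", "String", "The second thing")))`.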

### Tool Call Mapping

LLM tool call requests are mapped to actual method invocations through:

  • Code generation
  • Reflection-based approaches
  • ScalaMeta-based parameter mapping
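The shape of that mapping can be sketched with a hand-rolled registry. This is a stand-in for illustration only; llm4s itself uses code generation and ScalaMeta rather than a manual registry:

```scala
// Illustrative dispatch of an LLM tool-call request (tool name plus
// raw string arguments) to a registered Scala function.
object ToolDispatchDemo {
  type Args = Map[String, String]
  type Tool = Args => Either[String, String]

  // One registered tool: integer addition over stringly-typed args.
  private val registry: Map[String, Tool] = Map(
    "add" -> { args =>
      try Right((args("a").toInt + args("b").toInt).toString)
      catch { case _: RuntimeException => Left("add: expected integer args a and b") }
    }
  )

  // Unknown tools and bad arguments both surface as Left, never as exceptions.
  def dispatch(name: String, args: Args): Either[String, String] =
    registry.get(name).toRight(s"unknown tool: $name").flatMap(tool => tool(args))
}
```

Keeping dispatch total (every failure is a `Left`) is what lets tool errors flow back to the LLM as structured results instead of crashing the agent loop.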

### Secure Execution

Tools run in a protected Docker container environment to prevent accidental system damage or data leakage.

## Comprehensive Tracing & Observability

Tracing isn’t just for debugging - it’s the backbone of understanding model behavior. LLM4S’s observability layer includes:

  • Detailed token usage reporting
  • Multi-backend trace output (Langfuse, console, none)
  • Agent state visualization
  • Integration with monitoring dashboards

### Tracing Modes

Configure tracing behavior using the `TRACING_MODE` environment variable:

```bash
# Send traces to Langfuse (default)
TRACING_MODE=langfuse
LANGFUSE_PUBLIC_KEY=pk-lf-your-key
LANGFUSE_SECRET_KEY=sk-lf-your-secret

# Print detailed traces to console with colors and token usage
TRACING_MODE=print

# Disable tracing completely
TRACING_MODE=none
```
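A schematic of how such a mode flag can select a backend. The `Tracer` trait and objects here are illustrative stand-ins, not the llm4s `Tracing` API, and the Langfuse branch substitutes a console tracer since a real backend needs network access:

```scala
// Schematic mode-to-backend selection using the mode names above.
sealed trait Tracer { def traceEvent(msg: String): Unit }
object ConsoleTracer extends Tracer { def traceEvent(msg: String): Unit = println(s"[trace] $msg") }
object NoopTracer    extends Tracer { def traceEvent(msg: String): Unit = () }

object TracerSelect {
  def fromMode(mode: String, haveLangfuseKeys: Boolean): Either[String, Tracer] = mode match {
    case "print" | "console"            => Right(ConsoleTracer)
    case "none" | "noop"                => Right(NoopTracer)
    case "langfuse" if haveLangfuseKeys => Right(ConsoleTracer) // stand-in for a Langfuse-backed tracer
    case "langfuse"                     => Left("langfuse mode requires LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY")
    case other                          => Left(s"unknown TRACING_MODE: $other")
  }
}
```

Returning `Either` keeps misconfiguration (missing keys, unknown mode) a recoverable value rather than a startup crash.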

### Basic Usage

```scala
import org.llm4s.trace.{ EnhancedTracing, Tracing }

// Create tracer from environment (Result), fallback to console tracer
val tracer: Tracing = EnhancedTracing
  .createFromEnv()
  .fold(_ => Tracing.createFromEnhanced(new org.llm4s.trace.EnhancedConsoleTracing()), Tracing.createFromEnhanced)

// Trace events, completions, and token usage
tracer.traceEvent("Starting LLM operation")
tracer.traceCompletion(completion, completion.model) // prefer the model reported by the API
tracer.traceTokenUsage(tokenUsage, completion.model, "chat-completion")
tracer.traceAgentState(agentState)
```

## Usage with the llm4s.g8 starter kit

A carefully crafted starter kit to unlock the power of llm4s.

Note: the LLM4S template has moved to its own repository for better maintainability and independent versioning.

The llm4s.g8 starter kit helps you quickly create AI-powered applications using llm4s, with improved SDK usability and developer ergonomics. You can spin up a fully working Scala project with a single sbt command. The starter kit comes pre-configured with best practices, prompt-execution examples, CI, formatting hooks, unit testing, documentation, and cross-platform support.

Template repository: github.com/llm4s/llm4s.g8

Using sbt, run the following to create a new project:

```bash
sbt new llm4s/llm4s.g8 \
  --name=<your.project.name> \
  --package=<your.organization> \
  --version=0.1.0-SNAPSHOT \
  --llm4s_version=<llm4s.version> \
  --scala_version=<scala.version> \
  --munit_version=<munit.version> \
  --directory=<your.project.name> \
  --force
```

At the time of writing, 0.1.1 is the latest llm4s version and 1.1.1 the latest munit version; `scala_version` accepts Scala 2.x.x or 3.x.x. (Inline comments cannot follow the `\` line continuations, so they are listed here instead.)

For more information about the template, including the compatibility matrix and documentation, visit the template repository.


## Configuration: Unified Loaders

llm4s exposes a single configuration flow with sensible precedence:

  • Precedence: -D system properties > application.conf (if your app provides it) > reference.conf defaults.
  • Environment variables are wired via ${?ENV} in reference.conf (no .env reader required).
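That precedence order can be illustrated with plain Scala. This is a simplified sketch in which `Map`s stand in for the HOCON layers; the real loader is PureConfig-backed via `Llm4sConfig`:

```scala
// Simplified illustration of llm4s configuration precedence:
// -D system properties > application.conf > reference.conf defaults.
object ConfigPrecedence {
  def resolve(key: String,
              sysProps: Map[String, String],
              applicationConf: Map[String, String],
              referenceConf: Map[String, String]): Option[String] =
    sysProps.get(key)                   // highest priority: -D flags
      .orElse(applicationConf.get(key)) // then the app's own config
      .orElse(referenceConf.get(key))   // finally bundled defaults
}
```

The first layer that defines the key wins, so a `-Dllm4s.llm.model=...` flag overrides everything the application or library ships.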

Preferred typed entry points (PureConfig-backed via Llm4sConfig):

  • Provider / model:
    • Llm4sConfig.provider(): Result[ProviderConfig] – returns the typed provider config (OpenAI/Azure/Anthropic/Ollama).
    • LLMConnect.getClient(config: ProviderConfig): Result[LLMClient] – builds a client from a typed config.
  • Tracing:
    • Llm4sConfig.tracing(): Result[TracingSettings] – returns typed tracing settings.
    • EnhancedTracing.create(settings: TracingSettings): EnhancedTracing – builds an enhanced tracer from typed settings.
    • Tracing.create(settings: TracingSettings): Tracing – builds a legacy Tracing from typed settings.
  • Embeddings:
    • Llm4sConfig.embeddings(): Result[(String, EmbeddingProviderConfig)] – returns (provider, config) with validation.
    • EmbeddingClient.from(provider: String, cfg: EmbeddingProviderConfig): Result[EmbeddingClient] – builds an embeddings client from typed config.

Recommended usage patterns:

  • Model name for display: Llm4sConfig.provider().map(_.model) or prefer completion.model from API responses.
  • Tracing:
    • For enhanced tracing: Llm4sConfig.tracing().map(EnhancedTracing.create).
    • For legacy Tracing: Llm4sConfig.tracing().map(Tracing.create).
  • Workspace (samples): WorkspaceConfigSupport.load() to get workspaceDir, imageName, hostPort, traceLogPath.
  • Embeddings sample (samples): EmbeddingUiSettings.loadFromEnv, EmbeddingTargets.loadFromEnv, EmbeddingQuery.loadFromEnv (sample helpers backed by Llm4sConfig).

### Config Keys → Typed Settings

Use these loaders to convert flat keys and HOCON paths into typed, validated settings used by the code:

  • LLM model selection

    • Keys: llm4s.llm.model or LLM_MODEL
    • Type: ProviderConfig (with provider-specific subtypes)
    • Loader: Llm4sConfig.provider() + LLMConnect.getClient(...)
  • Tracing configuration

    • Keys: llm4s.tracing.mode | TRACING_MODE, LANGFUSE_URL, LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_ENV, LANGFUSE_RELEASE, LANGFUSE_VERSION
    • Type: TracingSettings
    • Loader: Llm4sConfig.tracing() → then EnhancedTracing.create or Tracing.create
  • Workspace settings (samples)

    • Keys: llm4s.workspace.dir | WORKSPACE_DIR, llm4s.workspace.image | WORKSPACE_IMAGE, llm4s.workspace.port | WORKSPACE_PORT, llm4s.workspace.traceLogPath | WORKSPACE_TRACE_LOG
    • Type: WorkspaceSettings
    • Loader: WorkspaceConfigSupport.load()
  • Embeddings: inputs and UI (samples)

    • Input paths: EMBEDDING_INPUT_PATHS or EMBEDDING_INPUT_PATH → EmbeddingTargets.loadFromEnv() → EmbeddingTargets
    • Query: EMBEDDING_QUERY → EmbeddingQuery.loadFromEnv() → EmbeddingQuery
    • UI knobs: MAX_ROWS_PER_FILE, TOP_DIMS_PER_ROW, GLOBAL_TOPK, SHOW_GLOBAL_TOP, COLOR, TABLE_WIDTH → EmbeddingUiSettings.loadFromEnv() → EmbeddingUiSettings
  • Embeddings: provider configuration

    • Key: EMBEDDING_PROVIDER or llm4s.embeddings.provider (required)
    • Supported providers: openai, voyage, ollama
    • Type: (String, EmbeddingProviderConfig)
    • Loader: Llm4sConfig.embeddings()
    • Provider-specific keys:
      • OpenAI: OPENAI_EMBEDDING_BASE_URL, OPENAI_EMBEDDING_MODEL, OPENAI_API_KEY
      • Voyage: VOYAGE_EMBEDDING_BASE_URL, VOYAGE_EMBEDDING_MODEL, VOYAGE_API_KEY
      • Ollama (local): OLLAMA_EMBEDDING_BASE_URL (default: http://localhost:11434), OLLAMA_EMBEDDING_MODEL
  • Provider API keys and endpoints

    • Keys: OPENAI_API_KEY, OPENAI_BASE_URL, ANTHROPIC_API_KEY, ANTHROPIC_BASE_URL, AZURE_API_BASE, AZURE_API_KEY, AZURE_API_VERSION, OLLAMA_BASE_URL, DEEPSEEK_API_KEY, DEEPSEEK_BASE_URL
    • Type: concrete ProviderConfig (e.g., OpenAIConfig, AnthropicConfig, AzureConfig, OllamaConfig, DeepSeekConfig)
    • Loader: Llm4sConfig.provider() → then provider-specific config constructors

### Tracing

  • Configure mode via llm4s.tracing.mode (default: console). Supported: langfuse, console, noop.
  • Override with env: TRACING_MODE=langfuse (or system property -Dllm4s.tracing.mode=langfuse).
  • Build tracers:
    • Typed: Llm4sConfig.tracing().map(EnhancedTracing.create) → Result[EnhancedTracing]
    • Legacy bridge: Llm4sConfig.tracing().map(Tracing.create)
    • Low-level: LangfuseTracing.fromEnv() → Result[LangfuseTracing]

Example (no application.conf required):

```bash
sbt -Dllm4s.llm.model=openai/gpt-4o -Dllm4s.openai.apiKey=sk-... "samples/runMain org.llm4s.samples.basic.BasicLLMCallingExample"
```

Or with environment variables (picked up via reference.conf):

```bash
export LLM_MODEL=openai/gpt-4o
export OPENAI_API_KEY=sk-...
sbt "samples/runMain org.llm4s.samples.basic.BasicLLMCallingExample"
```

## Continuous Integration (CI)

LLM4S uses GitHub Actions for continuous integration to ensure code quality and compatibility across different platforms and Scala versions.

### CI Workflows

#### Main CI Pipeline (ci.yml)

Our unified CI workflow runs on every push and pull request to main/master branches:

  • Quick Checks: Fast-failing checks for code formatting and compilation
  • Cross-Platform Testing: Tests run on Ubuntu and Windows with Scala 2.13.16 and 3.7.1
  • Template Validation: Verifies the g8 template works correctly
  • Caching: Optimized caching strategy with Coursier for faster builds

#### Claude Code Review (claude-code-review.yml)

Automated AI-powered code review for pull requests:

  • Automatic Reviews: Trusted PRs get automatic Claude reviews
  • Security: External PRs require manual trigger by maintainers
  • Manual Trigger: Maintainers can request reviews with @claude comment

#### Release Pipeline (release.yml)

Automated release process triggered by version tags (format: v0.1.11):

  • Tag Format: Must use v prefix (e.g., v0.1.11, not 0.1.11)
  • Pre-release Checks: Runs full CI suite before publishing
  • GPG Signing: Artifacts are signed for security
  • Maven Central: Publishes to Sonatype/Maven Central

See RELEASE.md for detailed release instructions.

### Running CI Locally

You can run the same checks locally before pushing:

```bash
# Check formatting
sbt scalafmtCheckAll

# Compile all Scala versions
sbt +compile

# Run all tests
sbt +test

# Full build (compile + test)
sbt buildAll
```

## Hands-On Sessions & Live Collaboration

Stay hands-on with LLM4S! Join us for interactive mob programming sessions, live debugging, and open-source collaboration. These events are great for developers, contributors, and anyone curious about Scala + GenAI.

🗓️ Weekly live coding and collaboration during LLM4S Dev Hour - join us every Sunday on Discord!

  • Session: LLM4S Dev Hour - Weekly Live Coding & Collaboration, a weekly mob programming session where we code, debug, and learn together - open to all!
  • When: 20-Jul-2025 onwards, weekly on Sundays at 9am local time (Online, London, UK)
  • Hosts: Kannupriya Kalra, Rory Graves
  • Updates: shared by the host in the #llm4s-dev-hour Discord channel after each session; a weekly-changing Luma invite link is provided for scheduling in your calendar
  • Featured in: LinkedIn, Reddit (two threads), Bluesky, Mastodon, X/Twitter, Scala Times – Issue #537

## 📢 Talks & Presentations

Stay updated with talks, workshops, and presentations about LLM4S given by maintainers and open-source developers around the world. These sessions dive into the architecture, features, and future plans of the project.

Snapshots from LLM4S talks held around the world 🌍.



### Upcoming & Past Talks

| Date | Event/Conference | Talk Title | Location | Speaker Name | Details | Recording | Featured In |
|---|---|---|---|---|---|---|---|
| 25-Feb-2025 | Bay Area Scala | Let's Teach LLMs to Write Great Scala! (Original version) | Tubi office, San Francisco, CA, USA 🇺🇸 | Kannupriya Kalra | Event Info, Reddit Discussion, Mastodon Post, Bluesky Post, X/Twitter Post, Meetup Event | Watch Recording | |
| 20-Apr-2025 | Scala India | Let's Teach LLMs to Write Great Scala! (Updated from Feb 2025) | India 🇮🇳 | Kannupriya Kalra | Event Info, Reddit Discussion, X/Twitter Post | Watch Recording | |
| 28-May-2025 | Functional World 2025 by Scalac | Let's Teach LLMs to Write Great Scala! (Updated from Apr 2025) | Gdansk, Poland 🇵🇱 | Kannupriya Kalra | LinkedIn Post 1, LinkedIn Post 2, Reddit Discussion, Meetup Link, X/Twitter Post | Watch Recording | Scalendar (May 2025), Scala Times 1, Scala Times 2 |
| 13-Jun-2025 | Dallas Scala Enthusiasts | Let's Teach LLMs to Write Great Scala! (Updated from May 2025) | Dallas, Texas, USA 🇺🇸 | Kannupriya Kalra | Meetup Event, LinkedIn Post, X/Twitter Post, Reddit Discussion, Bluesky Post, Mastodon Post | Watch Recording | Scalendar (June 2025) |
| 13-Aug-2025 | London Scala Users Group | Scala Meets GenAI: Build the Cool Stuff with LLM4S | The Trade Desk office, London, UK 🇬🇧 | Kannupriya Kalra, Rory Graves | Meetup Event, X/Twitter Post, Bluesky Post, LinkedIn Post | Recording will be posted once the event is done | Scalendar (August 2025) |
| 21-Aug-2025 | Scala Days 2025 | Scala Meets GenAI: Build the Cool Stuff with LLM4S | SwissTech Convention Center, EPFL campus, Lausanne, Switzerland 🇨🇭 | Kannupriya Kalra, Rory Graves | Talk Info, LinkedIn Post, X/Twitter Post, Reddit Discussion, Bluesky Post, Mastodon Post | Recording will be posted once the event is done | Scala Days 2025: August in Lausanne – Code, Community & Innovation; Scalendar (August 2025); Scala Days 2025 LinkedIn Post; Scala Days 2025 Highlights; Scala Days 2025 Wrap; Scala Days 2025 Recap – A Scala Community Reunion; Xebia Scala days blog |
| 25-Aug-2025 | Zürich Scala Enthusiasts | Fork It Till You Make It: Career Building with Scala OSS | Rivero AG, ABB Historic Building, Elias-Canetti-Strasse 7, Zürich, Switzerland 🇨🇭 | Kannupriya Kalra | Meetup Event, LinkedIn Post, X/Twitter Post, Bluesky Post, Mastodon Post, Reddit Discussion | Recording will be posted once the event is done | Scalendar (August 2025) |
| 18-Sept-2025 | Scala Center Talks | Lightning Talks Powered by GSoC 2025 for Scala | EPFL campus, Lausanne, Switzerland 🇨🇭 | Kannupriya Kalra | Event Invite, LinkedIn Post, Scala Center's LinkedIn Post, X/Twitter Post, Mastodon Post, Bluesky Post, Reddit Discussion | Recording will be posted once the event is done; View LLM4S Slides 1, 2, 3, 4; Download LLM4S Slides 1, 2, 3, 4 | |
| 12-18-Oct-2025 | ICFP/SPLASH 2025 (The Scala Workshop 2025) | Mentoring in the Scala Ecosystem: Insights from Google Summer of Code | Peony West, Marina Bay Sands Convention Center, Singapore 🇸🇬 | Kannupriya Kalra | ICFP/SPLASH 2025 Event Website, The Scala Workshop 2025 Schedule, LinkedIn Post, X/Twitter Post, Mastodon Post, Bluesky Post, Reddit Discussion | Watch Recording | |
| 24-Oct-2025 | GEN AI London 2025 | Building Reliable AI systems: From Hype to Practical Toolkits | Queen Elizabeth II Center in the City of Westminster, London, UK 🇬🇧 | Kannupriya Kalra | GEN AI London Event Website, GEN AI London 2025 Schedule, LinkedIn Post 1, LinkedIn Post 2, LinkedIn Post 3, X/Twitter Post, Mastodon Post, Bluesky Post, Reddit Discussion | Recording will be posted once the event is done; View Slides; Download Slides | |
| 23-25-Oct-2025 | Google Summer Of Code Mentor Summit 2025 | LLM4S x GSoC 2025: Engineering GenAI Agents in Functional Scala | Google Office, Munich, Erika-Mann-Str. 33 · 80636 München, Germany 🇩🇪 | Kannupriya Kalra | Event Website | Recording will be posted once the event is done; View Scala Center Slides; Download Scala Center Slides; View GSoC Mentor Summit All Speakers Slides; Download GSoC Mentor Summit All Speakers Slides | |
| 29-30-Nov-2025 | Oaisys Conf 2025: AI Practitioners Conference | LLM4S: Building Reliable AI Systems in the JVM Ecosystem | MCCIA, Pune, India 🇮🇳 | Kannupriya Kalra, Shubham Vishwakarma | Event Website, LinkedIn Post, X/Twitter Post, Mastodon Post, Bluesky Post, Reddit Post | Recording will be posted once the event recordings are available; View Slides; Download Slides | |
| 10-Dec-2025 | AI Compute & Hardware Conference 2025 | Functional Intelligence: Building Scalable AI Systems for the Hardware Era | Samsung HQ, San Jose, California, USA 🇺🇸 | Kannupriya Kalra | Event Website, Event details on Meetup, LinkedIn Post, X/Twitter Post, Mastodon Post, Bluesky Post, Reddit Post | Recording will be posted once the event recordings are available; View Slides; Download Slides | |

📝 Want to invite us for a talk or workshop? Reach out via our respective emails or connect on Discord: https://discord.gg/4uvTPn6qww

## Why You Should Contribute to LLM4S

  • Build AI-powered applications in a statically typed, functional language designed for large systems.
  • Help shape the Scala ecosystem’s future in the AI/LLM space.
  • Learn modern LLM techniques like zero-shot prompting, tool calling, and agentic workflows.
  • Collaborate with experienced Scala engineers and open-source contributors.
  • Gain real-world experience working with Dockerized environments and multi-LLM providers.
  • Contribute to a project that offers you the opportunity to become a mentor or contributor funded by Google through its Google Summer of Code (GSoC) program.
  • Join a global developer community focused on type-safe, maintainable AI systems.

## Contributing

Interested in contributing? Start here:

LLM4S GitHub Issues: https://lnkd.in/eXrhwgWY

## Join the Community

Want to be part of developing this and interact with other developers? Join our Discord community!

LLM4S Discord: https://lnkd.in/eb4ZFdtG

## Google Summer of Code (GSoC)


LLM4S was selected for GSoC 2025 under the Scala Center Organisation.


This project is also participating in Google Summer of Code (GSoC) 2025! If you're interested in contributing to the project as a contributor, check out the details here:

👉 Scala Center GSoC Ideas: https://lnkd.in/enXAepQ3

To know everything about GSoC and how it works, check out this talk:

🎥 GSoC Process Explained: https://lnkd.in/e_dM57bZ

To learn about the experience of GSoC contributors of LLM4S, check out their blogs in the section below.

📚 Explore Past GSoC Projects with Scala Center: https://www.gsocorganizations.dev/organization/scala-center/ This page includes detailed information on all GSoC projects with Scala Center from past years - including project descriptions, code repositories, contributor blogs, and mentor details.

### 👥 GSoC Contributor Onboarding Resources

Hello GSoCers and future GSoC aspirants! Here are some essential onboarding links to help you collaborate and stay organized within the LLM4S community.

  • 🔗 LLM4S GSoC GitHub Team: You have been invited to join the LLM4S GitHub team for GSoC participants. Accepting this invite will grant you access to internal resources and coordination tools. 👉 https://github.com/orgs/llm4s/teams/gsoc/members
  • 📌 Private GSoC Project Tracking Board: Once you're part of the team, you will have access to our private GSoC tracking board. This board helps you track tasks, timelines, and project deliverables throughout the GSoC period. 👉 https://github.com/orgs/llm4s/projects/3

### GSoC 2025: Google Open Source Funded Project Ideas from LLM4S

  • LLM4S - Implement an agentic toolkit for Large Language Models
  • LLM4S - RAG in a box
  • LLM4S - Support image, voice and other LLM modalities
  • LLM4S - Tracing support

Feel free to reach out to the contributors or mentors listed for any guidance or questions related to GSoC 2026.


Contributors selected across the globe for the GSoC 2025 program.


## 🚧 Behind the Build: Blogs & Series

We’ve got exciting news to share - Scalac, one of the leading Scala development companies, has officially partnered with LLM4S for a dedicated AI-focused blog series!

This collaboration was initiated after our talk at Functional World 2025, and it’s now evolving into a full-fledged multi-part series and an upcoming eBook hosted on Scalac’s platform. The series will combine practical Scala code, GenAI architecture, and reflections from the LLM4S team - making it accessible for Scala developers everywhere who want to build with LLMs.

📝 The first post is already drafted and under review by the Scalac editorial team. We’re working together to ensure this content is both technically insightful and visually engaging.

🎉 Thanks to Matylda Kamińska, Rafał Kruczek, and the Scalac marketing team for this opportunity and collaboration!

Stay tuned - the series will be published soon on scalac.io/blog, and we’ll link it here as it goes live.


LLM4S blogs powered by Scalac.



## 📖 Community blogs & articles

Technical deep-dives, production stories, and insights from LLM4S contributors. These articles chronicle real-world implementations, architectural decisions, and lessons learned from building type-safe LLM infrastructure in Scala.

| Author | Title | Topics Covered | Part of Series | Link |
|---|---|---|---|---|
| Vitthal Mirji | llm4s: type-safe LLM infrastructure for Scala that stay 1-step ahead of everything | Introduction to llm4s, why type safety matters, runtime → compile-time errors, provider abstraction, agent framework overview | Building type-safe LLM infrastructure (Part 1/7) | Read article |
| Vitthal Mirji | Developer experience: How we turned 20-minute llm4s setup into 60 seconds | Giter8 template creation, onboarding friction elimination, starter kit design, 95% time savings (PR #101) | Building type-safe LLM infrastructure (Part 2/7) | Read article |
| Vitthal Mirji | Production error handling: When our LLM pipeline threw 'Unknown error' for everything | Type-safe error hierarchies, ADTs, Either-based error handling, 60% faster debugging (PR #137) | Building type-safe LLM infrastructure (Part 3/7) | Read article |
| Vitthal Mirji | Error hierarchy refinement: Smart constructors and the code we deleted | Smart constructors, trait-based error classification, eliminating boolean flags, -263 lines (PR #197) | Building type-safe LLM infrastructure (Part 4/7) | Read article |
| Vitthal Mirji | Type system upgrades: The 'asistant' typo that compiled and ran in production | String literals → MessageRole enum, 6 type classes, compile-time typo prevention, 43-file migration (PR #216) | Building type-safe LLM infrastructure (Part 5/7) | Read article |
| Vitthal Mirji | Safety refactor: The P1 streaming bug that showed wrong errors and 47 try-catch blocks | Eliminating 47 try-catch blocks, safety utilities, resource management, streaming bug fix, -260 net lines (PR #260) | Building type-safe LLM infrastructure (Part 6/7) | Read article |
| Vitthal Mirji | 5 Production patterns from building llm4s: What actually works | Pattern-based design, type-safe foundations, developer experience first, migration playbooks, production lessons learned | Building type-safe LLM infrastructure (Part 7/7) | Read article |

💡 You can contribute blogs too! Share your LLM4S experience, architectural insights, or production lessons. Reach out to maintainers on Discord or create a PR updating this table.


## ✍️ Blogs Powered by GSoC

Our Google Summer of Code (GSoC) 2025 contributors have actively documented their journeys, sharing insights and implementation deep-dives from their projects. These blog posts offer valuable perspectives on how LLM4S is evolving from a contributor-first lens.

| Contributor | Blog(s) | Project |
|---|---|---|
| Elvan Konukseven | elvankonukseven.com/blog | Agentic Toolkit for LLMs |
| Gopi Trinadh Maddikunta | Main Blog; Scala at Light Speed – Part 1; Scala at Light Speed – Part 2 | RAG in a Box |
| Anshuman Awasthi | Anshuman's GSoC Journey | Multimodal LLM Support |
| Shubham Vishwakarma | Cracking the Code: My GSoC 2025 Story | Tracing and Observability |

💡 These blogs reflect first-hand experience in building real-world AI tools using Scala, and are great resources for future contributors and researchers alike.

### 📚 Where Innovation Speaks: Articles by GSoC's Brightest

Elvan Konukseven

Gopi Trinadh Maddikunta

Anshuman Awasthi

Shubham Vishwakarma

## Maintainers

Want to connect with maintainers? The LLM4S project is maintained by:

## License

This project is licensed under the MIT License - see the LICENSE file for details.

