LangChain Integration #6

@DevilsAutumn

Description

We want to add LangChain integration so the library can use LangChain internally to manage multiple model calls more cleanly. Right now, handling different providers and switching between models requires a lot of custom logic. LangChain already provides a unified interface for calling various LLMs, handling prompts, and managing model configurations.

By integrating LangChain under the hood, we can simplify how we perform repeated or batch model calls during benchmarking. This will make the codebase cleaner and also make it easier to support new models in the future.

What Needs to Be Done

  • Add a small internal wrapper that uses LangChain’s LLM interfaces for sending requests.
  • Support batch or repeated calls via LangChain for benchmarks that require multiple runs.
  • Capture latency, tokens, and responses consistently across different providers.
  • Ensure LangChain integrations don’t break existing direct-provider integrations.
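As a starting point for discussion, here is a minimal sketch of what the internal wrapper could look like. The names (`BenchmarkLLM`, `CallResult`, `invoke_fn`) are hypothetical, and the LangChain model is stubbed with a plain callable so the shape of the metric capture is visible; in practice `invoke_fn` would delegate to a LangChain chat model, and token counts would come from the provider's usage metadata rather than a whitespace split.

```python
import time
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class CallResult:
    """One benchmarked model call: response text plus captured metrics."""
    response: str
    latency_s: float
    approx_tokens: int  # rough whitespace count; a real impl would read provider usage stats


class BenchmarkLLM:
    """Hypothetical internal wrapper. In the real integration, `invoke_fn`
    would wrap a LangChain model's invocation so all providers are called
    through one interface."""

    def __init__(self, invoke_fn: Callable[[str], str]):
        self._invoke = invoke_fn

    def call(self, prompt: str) -> CallResult:
        # Capture latency consistently regardless of which provider backs the call.
        start = time.perf_counter()
        text = self._invoke(prompt)
        latency = time.perf_counter() - start
        return CallResult(text, latency, len(text.split()))

    def batch(self, prompts: List[str]) -> List[CallResult]:
        # Repeated/batch calls for benchmarks that need multiple runs.
        return [self.call(p) for p in prompts]


# Stub standing in for a LangChain-backed model call.
echo = BenchmarkLLM(lambda p: f"echo: {p}")
results = echo.batch(["hi", "benchmark run"])
```

Keeping the wrapper's surface this small means existing direct-provider integrations can coexist: only code paths that opt into the wrapper go through LangChain.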

Why This Is Useful

Using LangChain internally saves us from writing provider-specific code repeatedly. It becomes much easier to support new models, run multiple calls, and maintain consistent behavior across providers. This also future-proofs the library and reduces maintenance overhead.
