Local Caching for LLM Columns #360

@JaoMarcos

Description

Priority Level

Medium (Nice to have)

Is your feature request related to a problem? Please describe.

Training models and evaluating whether synthetic data improves performance requires generating significant amounts of data. This experimentation phase is expensive and time-consuming because it often involves rerunning the same prompts across multiple pipeline iterations. If novel outputs are not required for identical inputs, we can save time and money by implementing a caching system.

Describe the solution you'd like

I would like a simple cache option on LLM columns (Text, Structured, Code, and Judge).
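A minimal sketch of what such a cache could look like, assuming nothing about the project's internals: a local store keyed on a hash of the model name, prompt, and sampling parameters, wrapped around the LLM call so identical requests reuse the stored completion. The class and function names (`LocalLLMCache`, `generate_with_cache`) are hypothetical, not part of any existing API.

```python
import hashlib
import json
import sqlite3


class LocalLLMCache:
    """Hypothetical local prompt cache backed by SQLite.

    Keys on a hash of (model, prompt, params) so that rerunning a
    pipeline with identical inputs reuses stored completions.
    """

    def __init__(self, path: str = ":memory:"):
        # ":memory:" for illustration; a real cache would use a file path.
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, value TEXT)"
        )

    def _key(self, model: str, prompt: str, params: dict) -> str:
        # Canonical JSON so equivalent param dicts hash identically.
        payload = json.dumps(
            {"model": model, "prompt": prompt, "params": params}, sort_keys=True
        )
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

    def get(self, model: str, prompt: str, params: dict):
        row = self.conn.execute(
            "SELECT value FROM cache WHERE key = ?",
            (self._key(model, prompt, params),),
        ).fetchone()
        return row[0] if row else None

    def put(self, model: str, prompt: str, params: dict, completion: str):
        self.conn.execute(
            "INSERT OR REPLACE INTO cache (key, value) VALUES (?, ?)",
            (self._key(model, prompt, params), completion),
        )
        self.conn.commit()


def generate_with_cache(cache, model, prompt, params, call_llm):
    """Wrap an LLM call: return the cached output for identical inputs."""
    cached = cache.get(model, prompt, params)
    if cached is not None:
        return cached
    result = call_llm(model, prompt, params)
    cache.put(model, prompt, params, result)
    return result
```

Keying on sampling parameters as well as the prompt matters here: a request at `temperature=0.0` and one at `temperature=1.0` should not share a cache entry, since only the former is expected to be deterministic.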

Describe alternatives you've considered

No response

Additional context

No response

Metadata

Assignees

No one assigned

    Labels

    enhancement (New feature or request)
