LLM Evaluations

A codebase for evaluating Large Language Models (LLMs) on financial research tasks. It contains tools and methodologies for assessing LLM performance, accuracy, and reliability in financial analysis.
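As a rough illustration of the kind of accuracy scoring an evaluation like this might use (hypothetical function, not the repository's actual API), here is a minimal exact-match metric over model answers and gold references:

```python
# Hypothetical illustration only: score LLM answers to financial questions
# by normalized exact match against a gold reference.
def exact_match_accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions matching the reference after normalization."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must be the same length")
    matches = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return matches / len(references)

# Example: two of three answers about company fundamentals are correct.
score = exact_match_accuracy(
    ["$394.3B", "2.5%", "AAPL"],
    ["$394.3B", "3.1%", "aapl"],
)
print(score)  # prints 0.6666666666666666
```

Real financial-research evaluations often need looser matching (numeric tolerance, unit normalization), but exact match is a common baseline.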

📋 Table of Contents

  • 🚀 Quick Start
  • 🤝 Contributing
  • 📝 License

🚀 Quick Start

Prerequisites

  • Python 3.11+
  • uv package manager

Installation

  1. Clone the repository

    git clone https://github.com/financial-datasets/llm-evaluations.git
    cd llm-evaluations
  2. Install dependencies

    uv sync
  3. Set up environment variables

    cp .env.example .env
    # Edit .env with your API keys and configuration
  4. Run the example

    uv run main.py
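The example run above drives an evaluation loop. A minimal sketch of that flow, assuming hypothetical names (`run_eval`, `EvalResult`; `main.py`'s real entry point may differ), with a stub callable standing in for an actual LLM call:

```python
# Sketch of a generic evaluation loop (hypothetical names, not the
# repository's actual API): query a model on each task and record results.
from dataclasses import dataclass

@dataclass
class EvalResult:
    task: str
    correct: bool

def run_eval(model, tasks: list[dict]) -> list[EvalResult]:
    """Run `model` (any callable prompt -> answer) on each task dict."""
    results = []
    for task in tasks:
        answer = model(task["prompt"])
        results.append(EvalResult(task["name"], answer.strip() == task["expected"]))
    return results

# Usage with a stub model in place of a real LLM API call:
tasks = [
    {"name": "revenue", "prompt": "What was MSFT FY2023 revenue?", "expected": "$211.9B"},
    {"name": "ticker", "prompt": "Ticker for Microsoft?", "expected": "MSFT"},
]
stub = lambda prompt: "$211.9B" if "revenue" in prompt else "MSFT"
results = run_eval(stub, tasks)
print(sum(r.correct for r in results), "/", len(results))  # prints 2 / 2
```

Swapping the stub for a real API client is the only change needed to evaluate an actual model; the API keys configured in `.env` would be consumed at that point.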

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

📝 License

This project is licensed under the MIT License.
