The Conversation Evaluation project uses Large Language Models (LLMs) as judges to evaluate conversational agents through simulated users. It is intended for anyone who wants to reduce the manual effort involved in evaluating conversational agents.
- Simulation: Simulates conversations between the agent and an LLM-based user simulator.
- AI Library Support: Supports the major AI inference providers, including Together, OpenAI, and Google, as well as self-hosted models served through vLLM and Ollama.
- LLM as Judge: Uses LLMs to score and evaluate conversational agents.
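As a rough illustration of the LLM-as-judge pattern, the sketch below formats a conversation transcript into a grading prompt and parses the judge's numeric reply. The prompt wording and the `llm_call` parameter are hypothetical stand-ins, not the project's actual API:

```python
# Minimal sketch of the LLM-as-judge pattern. The prompt text and the
# `llm_call` callable are illustrative assumptions, not this project's API.

JUDGE_PROMPT = (
    "You are an impartial judge. Rate the assistant's helpfulness in the "
    "conversation below on a scale of 1-5. Reply with only the number.\n\n"
    "{transcript}"
)

def format_transcript(turns):
    """Flatten a list of {'role', 'content'} turns into a readable transcript."""
    return "\n".join(f"{t['role']}: {t['content']}" for t in turns)

def judge_conversation(turns, llm_call):
    """Ask the judge LLM for a 1-5 helpfulness score and parse its reply."""
    prompt = JUDGE_PROMPT.format(transcript=format_transcript(turns))
    reply = llm_call(prompt)
    return int(reply.strip())

# Usage with a stubbed LLM; a real client would call Together, OpenAI, etc.
turns = [
    {"role": "user", "content": "How do I reset my password?"},
    {"role": "assistant", "content": "Click 'Forgot password' on the login page."},
]
score = judge_conversation(turns, llm_call=lambda prompt: "4")
print(score)  # 4
```

In practice the same stub-based structure also makes the judge easy to unit-test without spending API calls.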
To install the project, install the required packages from the requirements file:

```bash
pip install -r requirements.txt
```

To run the evaluation, provide a configuration file in YAML or JSON format to the main.py script.
Create a .env file and define the API keys there:

```bash
TOGETHER_API_KEY="your_together_api_key"
OPENAI_API_KEY="your_openai_key"
```

Then run the script with your config file:

```bash
python main.py config.yaml
```

It will produce a JSON file containing the evaluation results.
A reference configuration is provided: see the config.yaml file for an example and result.json for sample output.
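The exact schema is defined by the project's reference config.yaml, but a configuration of this kind often looks roughly like the following. Every field name below is illustrative, not the project's actual schema:

```yaml
# Illustrative only -- consult the reference config.yaml for the real schema.
agent:
  provider: openai            # together | openai | google | vllm | ollama
  model: gpt-4o-mini
user_simulator:
  provider: together
  model: meta-llama/Llama-3-8b-chat-hf
  persona: "frustrated customer trying to reset a password"
judge:
  provider: openai
  model: gpt-4o
  metrics: [helpfulness, coherence]
output: result.json
```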
#TODO
- Update the README file.
- Create a UI dashboard.
- Add a guide for using the conversation_insight_generation tool.