PBE-Meets-LLM explores the intersection of Programming by Example (PBE) and Large Language Models (LLMs), aiming to enhance the capabilities of LLMs in synthesizing programs from input-output examples.
Programming by Example is a paradigm where programs are synthesized based on provided input-output pairs. This project investigates how LLMs can be leveraged to perform PBE tasks more effectively, potentially improving upon traditional methods in terms of accuracy and generalization.
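As a concrete illustration of the PBE setting (a toy task, not one of the repository's benchmarks), a task is a small set of input-output pairs, and the synthesizer's job is to produce a program consistent with all of them:

```python
# A toy PBE task: extract the domain from an email address.
# The example pairs and the candidate program are illustrative only.
examples = [
    ("alice@example.com", "example.com"),
    ("bob@illinois.edu", "illinois.edu"),
]

def candidate_program(s: str) -> str:
    """One program consistent with the examples above."""
    return s.split("@")[1]

# A synthesizer (traditional or LLM-based) is judged by whether its
# output program reproduces every provided input-output pair.
assert all(candidate_program(x) == y for x, y in examples)
```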
- Python 3.10
- For Llama 2: PyTorch, Transformers
- For Foofah and Hybrid: Docker
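A quick, optional sanity check (not part of the documented setup) that the interpreter version and the Llama 2 dependencies listed above are available:

```python
# Optional environment check; package names follow the list above,
# and no versions are assumed beyond Python 3.10.
import sys

assert sys.version_info[:2] == (3, 10), "This project targets Python 3.10"

import torch          # PyTorch, required for Llama 2
import transformers   # Hugging Face Transformers, required for Llama 2

print("torch", torch.__version__, "| transformers", transformers.__version__)
```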
- Clone the repository:
  git clone https://github.com/illinoisdata/PBE-Meets-LLM.git
  cd PBE-Meets-LLM
- Install the required packages:
  pip install -r requirements.txt
To run experiments, use the provided scripts in the interface/ directory. For example, to run a one-shot knowledge prompt with the gpt-4o model, use the following command:
cd interface
python one_shot_gpt.py --api_key sk-... --model gpt-4o --prompt_type knowledge --test_file foofah
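For reference, the following is a minimal sketch of what such a one-shot, knowledge-style call to gpt-4o can look like with the OpenAI Python client; the prompt template and example pair are illustrative assumptions, not the contents of one_shot_gpt.py.

```python
# Hypothetical sketch of a one-shot "knowledge" prompt; the template and
# the example pair are illustrative, not copied from one_shot_gpt.py.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # same key passed via --api_key

prompt = (
    "You are given an input-output example of a data transformation.\n"
    "First describe the transformation, then write a Python function "
    "that implements it.\n\n"
    'Input: "2021-03-05"  Output: "05/03/2021"\n'
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```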
To run PROSE, navigate to the prose-api folder under model:
cd interface/prose
bash run.sh
Since Foofah was built with Python 2, we use Docker to run it. Please follow the instructions in the following files to run the experiments.
All experiments are evaluated with exact-match metrics. Depending on the setting (one_shot or multi_try), the evaluation can be run with the provided scripts in the evaluation/ directory.
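As an illustration of the exact-match criterion, here is a short sketch (not the repository's evaluation code; the JSONL results file and its field names are assumptions):

```python
# Minimal exact-match evaluation sketch; the file path and the field names
# ("prediction", "target") are hypothetical and used only for illustration.
import json

def exact_match_accuracy(path: str) -> float:
    """Fraction of cases where the predicted output equals the expected output."""
    total, correct = 0, 0
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            total += 1
            correct += record["prediction"].strip() == record["target"].strip()
    return correct / total if total else 0.0

if __name__ == "__main__":
    print(f"exact match: {exact_match_accuracy('results.jsonl'):.3f}")
```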