FlowerTune LLM on Code Dataset

This directory conducts federated instruction tuning with a pretrained Llama-3.2-3B model on a Code dataset. We use Flower Datasets to download, partition, and preprocess the dataset, and Flower's Simulation Engine to simulate the LLM fine-tuning process in a federated way, which allows users to run the training on a single GPU.

Methodology

This baseline performs federated LLM fine-tuning with LoRA using the 🤗PEFT library. The clients' models are aggregated with the FedAvg strategy. This provides a baseline performance for the leaderboard of the Code challenge.
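For orientation, here is a minimal sketch of how a LoRA adapter is typically attached to a base model with PEFT. The hyperparameter values below (rank, alpha, dropout, target modules) are illustrative placeholders; the values this baseline actually uses are defined in pyproject.toml.

```python
# Minimal sketch: wrap the base model in a LoRA adapter with 🤗 PEFT.
# All hyperparameter values below are illustrative; the baseline's
# actual values are defined in pyproject.toml.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B")

lora_config = LoraConfig(
    r=8,                                  # LoRA rank (illustrative)
    lora_alpha=16,                        # scaling factor (illustrative)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # a common choice for Llama-style models
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only adapter weights are trainable
```

Only the adapter weights are trainable, which keeps both the per-client computation and the parameters exchanged in each FedAvg round small.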

Environment setup

Project dependencies are defined in pyproject.toml. Install them in an activated Python environment with:

```bash
pip install -e .
```

Experimental setup

The dataset is divided into 10 partitions in an IID fashion, and one partition is assigned to each ClientApp. In each round, we randomly sample a fraction (0.2) of the total nodes to participate, for a total of 200 rounds. All settings are defined in pyproject.toml.
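As a rough sketch, this is how Flower Datasets produces such IID partitions. The dataset id below is a placeholder; the baseline's actual dataset is configured in pyproject.toml.

```python
# Minimal sketch: split a dataset into 10 IID partitions with Flower Datasets.
# The dataset id is a placeholder; the real one is set in pyproject.toml.
from flwr_datasets import FederatedDataset
from flwr_datasets.partitioner import IidPartitioner

fds = FederatedDataset(
    dataset="some-org/some-code-dataset",  # placeholder Hugging Face dataset id
    partitioners={"train": IidPartitioner(num_partitions=10)},
)

partition = fds.load_partition(partition_id=0)  # training data for one ClientApp
```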

Important

Please note that [tool.flwr.app.config.static] and options.num-supernodes under [tool.flwr.federations.local-simulation] must not be modified if you plan to participate in the LLM leaderboard, to keep the competition fair.

Running the challenge

First, make sure that you have access to the Llama-3.2-3B model with your Hugging Face account. You can request access directly from the Hugging Face website. Then, follow the instructions here to log in to your account. Note that you only need to complete this step once on your development machine:

```bash
huggingface-cli login
```

Run the challenge with default config values. The configs are defined in the [tool.flwr.app.config] entry of pyproject.toml, and are loaded automatically.

```bash
flwr run
```
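For quick experiments you can also override individual config values from the command line; in recent Flower versions this is done with the `--run-config` flag, e.g. `flwr run . --run-config "num-server-rounds=10"`. Keep the leaderboard restrictions above in mind when changing values.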

Model saving

By default, the global PEFT model checkpoints are saved every 5 rounds after aggregation on the server side. The interval can be changed via train.save-every-round under the [tool.flwr.app.config] entry in pyproject.toml.
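For reference, periodic server-side saving of the aggregated parameters can be implemented by extending Flower's FedAvg strategy, roughly as sketched below. The class name and the raw .npz output are illustrative assumptions; the baseline converts the aggregated arrays back into a PEFT checkpoint instead.

```python
# Minimal sketch: save the aggregated global parameters every N rounds.
# The class name and .npz output format are illustrative assumptions;
# the baseline stores proper PEFT checkpoints instead.
import numpy as np
from flwr.common import parameters_to_ndarrays
from flwr.server.strategy import FedAvg

class SaveEveryNRoundsFedAvg(FedAvg):
    def __init__(self, save_every_round: int = 5, **kwargs):
        super().__init__(**kwargs)
        self.save_every_round = save_every_round

    def aggregate_fit(self, server_round, results, failures):
        # Run standard FedAvg aggregation first.
        parameters, metrics = super().aggregate_fit(server_round, results, failures)
        if parameters is not None and server_round % self.save_every_round == 0:
            # Persist the aggregated global model every N rounds.
            ndarrays = parameters_to_ndarrays(parameters)
            np.savez(f"checkpoint_round_{server_round}.npz", *ndarrays)
        return parameters, metrics
```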

Checkpoint download

Link