
Custom LLM Fine-Tuning Platform

An end-to-end platform for fine-tuning and serving large language models (LLMs) on domain-specific datasets. Built with LoRA/PEFT, distributed training, and production-grade deployment.


Recent Work

Data Pipeline Implementation → For details, check out the Data-Pipeline README.

Key Features

Fine-Tuning → Parameter-efficient training (LoRA/QLoRA) on user datasets (see the LoRA sketch after this list).

Scalability → Distributed training for faster runs on large models and datasets.

Experiment Tracking → MLflow integration with auto-generated model cards (see the MLflow sketch after this list).

Serving → FastAPI + Docker/Kubernetes deployment with GPU batching.

Monitoring → Drift detection, performance dashboards, and feedback loops.
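
As a rough illustration of the parameter-efficient path, the sketch below wraps a base model with a LoRA adapter using Hugging Face transformers and peft. The model name, rank, and target modules are placeholders, not the platform's actual configuration.

```python
# Minimal LoRA fine-tuning sketch (illustrative values, not the platform defaults).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model = "bigcode/starcoderbase-1b"  # placeholder; any causal LM supported by transformers
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains small low-rank adapter matrices instead of the full weights.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # adapter rank
    lora_alpha=16,              # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection for StarCoderBase; use "q_proj"/"v_proj" for LLaMA-style models
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model's weights
```

For QLoRA, the same adapter is attached to a quantized base model (e.g. loaded in 4-bit via bitsandbytes) to reduce memory further.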
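
Experiment tracking follows the standard MLflow pattern. The run name, parameters, and metric values below are illustrative only.

```python
# Hedged sketch of MLflow experiment tracking for a fine-tuning run.
import mlflow

mlflow.set_experiment("llm-finetuning")

with mlflow.start_run(run_name="lora-starcoderbase"):
    # Log the hyperparameters used for this run (placeholder values).
    mlflow.log_params({"base_model": "bigcode/starcoderbase-1b", "lora_r": 8, "epochs": 3})

    # Dummy losses standing in for the real training loop.
    for epoch, loss in enumerate([2.1, 1.7, 1.5]):
        mlflow.log_metric("train_loss", loss, step=epoch)

    # Attach the generated model card as a run artifact (hypothetical path).
    mlflow.log_artifact("model_card.md")
```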


How It Works

Upload dataset → preprocessing & validation.

Fine-tune a base model (StarCoderBase, LLaMA, or Falcon).

Track experiments and metrics.

Deploy the model as an API endpoint (see the serving sketch after this list).

Monitor → retrain with new data.
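
The deployment step exposes the fine-tuned model over HTTP. Below is a minimal FastAPI sketch, assuming a merged adapter checkpoint at ./merged-model; the route and request schema are illustrative, not the platform's actual API.

```python
# Hedged sketch of a FastAPI inference endpoint for the fine-tuned model.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="Fine-tuned LLM API")

# Load the fine-tuned (adapter-merged) model once at startup (hypothetical path).
generator = pipeline("text-generation", model="./merged-model")

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 128

@app.post("/generate")
def generate(req: GenerateRequest):
    output = generator(req.prompt, max_new_tokens=req.max_new_tokens)
    return {"completion": output[0]["generated_text"]}
```

In a containerized setup this app would typically be served with Uvicorn (e.g. `uvicorn app:app --host 0.0.0.0 --port 8000`) behind the Docker/Kubernetes deployment described above.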


Example Use Cases

FinTech copilots trained on regulatory codebases.

Healthcare assistants fine-tuned on medical knowledge.

Enterprise AI copilots for private code repositories.
