
⚡ Distributed Model Training with Ray

📌 Overview

This repository explores research-backed optimization techniques for model training using the Ray library in Python.
The primary goal is to make model training easier, faster, and more scalable by leveraging Ray's distributed computing capabilities.

Instead of relying on a single core, the workload is divided across multiple CPU cores (and nodes, if available), cutting wall-clock training time and improving hardware utilization.


🧠 Core Idea

  1. Traditional model training → Runs sequentially or with limited parallelization.
  2. Ray → Distributes training tasks across multiple CPU cores (or GPUs).
  3. Optimized training → Faster experiments, reduced bottlenecks, and efficient resource utilization.
  4. Research goal → Evaluate and improve current techniques using Ray’s ecosystem.

🔍 Features

  • ⚡ Parallel & distributed execution with Ray
  • 🖥️ Multi-core utilization for faster training
  • 📊 Scalable experiments with large datasets and models
  • 🔬 Research-focused: compares baseline vs optimized training approaches
  • 🔄 Easily extendable for different ML/DL frameworks

⚙️ Installation

Clone the repository:

```bash
git clone https://github.com/TahaGPT/Ray.git
cd Ray
```

Install Ray if it is not already available (`pip install -U ray`), then start a local Ray head node:

```bash
ray start --head
```
