Model Forge CLI πŸš€


A powerful CLI tool for managing and running AI models locally with ease. Download, manage, serve, and benchmark AI models from Hugging Face and other sources with a single command.

✨ Features

  • πŸ“¦ Easy Model Management - Download and manage AI models with simple commands
  • πŸš€ Multiple Format Support - GGUF, SafeTensors, ONNX, PyTorch, and more
  • 🌐 Local Model Server - Serve models via REST API instantly
  • πŸ“Š Benchmarking - Test model performance and resource usage
  • 🎯 Interactive Mode - User-friendly interactive selection
  • πŸ“ˆ Progress Tracking - Beautiful progress bars for downloads
  • πŸ”§ Configurable - Customize settings via YAML configuration
  • πŸ€– Pre-configured Models - Quick access to popular models like Llama 2, Mistral, Phi-2, and more

πŸ“‹ Prerequisites

  • Node.js 18.0.0 or higher
  • npm or yarn
  • 4GB+ RAM recommended
  • Sufficient disk space for models (varies by model size)

πŸš€ Quick Start

Install globally via npm:

npm install -g model-forge-cli

Or clone and build from source:

git clone https://github.com/abdul-omira/model-forge-cli.git
cd model-forge-cli
npm install
npm run build
npm link

πŸ“– Usage

List Available Models

# Show all available models from the registry
model-forge list --remote

# Show only installed models
model-forge list --installed

Install a Model

# Install a specific model
model-forge install llama2-7b

# Force reinstall
model-forge install mistral-7b --force

Serve a Model

# Start a local server for your model
model-forge serve phi-2

# Custom port and host
model-forge serve codellama-7b --port 3000 --host 0.0.0.0

Remove a Model

model-forge remove llama2-7b

Run Benchmarks

model-forge benchmark phi-2

Interactive Mode

model-forge interactive

Configuration

# View current configuration
model-forge config --list

# Set a configuration value
model-forge config --set defaultModel=mistral-7b

# Get a specific configuration
model-forge config --get serverPort
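
The `--set` flag takes a single `key=value` pair. As an illustration of that syntax (the helper below is hypothetical, not part of model-forge itself), a `key=value` argument can be split on the first `=` so that values containing `=` survive intact:

```javascript
// Hypothetical sketch: splitting a `config --set key=value` argument.
// `parseSetArg` is an illustrative name, not an actual model-forge function.
function parseSetArg(arg) {
  const idx = arg.indexOf("=");
  if (idx === -1) {
    throw new Error(`Expected key=value, got "${arg}"`);
  }
  // Split on the first "=" only, so values may themselves contain "=".
  return { key: arg.slice(0, idx), value: arg.slice(idx + 1) };
}

// parseSetArg("defaultModel=mistral-7b")
// → { key: "defaultModel", value: "mistral-7b" }
```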

πŸ€– Available Models

| Model                 | Size   | Type        | Description                       |
|-----------------------|--------|-------------|-----------------------------------|
| llama2-7b             | 3.8 GB | GGUF        | Meta Llama 2 7B parameter model   |
| mistral-7b            | 4.1 GB | GGUF        | Mistral 7B Instruct v0.2          |
| phi-2                 | 1.7 GB | GGUF        | Microsoft Phi-2 2.7B model        |
| codellama-7b          | 3.8 GB | GGUF        | Code Llama 7B for code generation |
| gemma-2b              | 1.4 GB | GGUF        | Google Gemma 2B model             |
| stable-diffusion-v1-5 | 4.2 GB | SafeTensors | Image generation model            |
| whisper-small         | 466 MB | ONNX        | Speech recognition model          |
| bert-base             | 420 MB | ONNX        | Text classification model         |

🌐 API Endpoints

When serving a model, the following endpoints are available:

| Endpoint      | Method | Description                        |
|---------------|--------|------------------------------------|
| /health       | GET    | Server health check                |
| /models       | GET    | List all installed models          |
| /model/info   | GET    | Get current model information      |
| /predict      | POST   | Make a prediction                  |
| /generate     | POST   | Generate text (supports streaming) |

Example API Usage

# Health check
curl http://localhost:8080/health

# Make a prediction
curl -X POST http://localhost:8080/predict \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello, world!", "max_tokens": 100}'

# Generate with streaming
curl -X POST http://localhost:8080/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Tell me a story", "stream": true}'
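
The same calls can be made from Node.js. The sketch below assumes Node 18+ (which provides a global `fetch`) and that `/predict` returns a JSON body; the exact response shape is not documented above, so treat it as an assumption and inspect the server's actual output:

```javascript
// Minimal Node.js client sketch for the local model server.
// Assumes Node 18+ (global fetch); the JSON response shape is an assumption.

function buildPredictRequest(prompt, maxTokens = 100) {
  // Mirrors the curl example: POST /predict with a JSON body.
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, max_tokens: maxTokens }),
  };
}

async function predict(prompt, baseUrl = "http://localhost:8080") {
  const res = await fetch(`${baseUrl}/predict`, buildPredictRequest(prompt));
  if (!res.ok) {
    throw new Error(`Server responded with ${res.status}`);
  }
  return res.json();
}
```

A server started with `model-forge serve <model>` must be running before calling `predict`; otherwise the `fetch` call will reject with a connection error.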

πŸ”§ Configuration File

Configuration is stored in ~/.model-forge/config.yaml:

defaultModel: phi-2
modelsDirectory: ~/.model-forge/models
serverPort: 8080
serverHost: localhost
autoUpdate: true
telemetry: false
maxConcurrentDownloads: 3
cacheDirectory: ~/.model-forge/cache

πŸ§ͺ Development

# Install dependencies
npm install

# Run in development mode
npm run dev

# Run tests
npm test

# Run tests with coverage
npm run test:coverage

# Lint code
npm run lint

# Format code
npm run format

# Build for production
npm run build

🀝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

πŸ“ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ’– Support

If you find this project useful, please consider:

  • ⭐ Giving it a star on GitHub
  • πŸ› Reporting bugs or requesting features via Issues
  • πŸ’° Sponsoring the development
  • πŸ“’ Sharing it with others who might find it useful

πŸ™ Acknowledgments

  • Thanks to the Hugging Face team for hosting amazing models
  • The open-source AI community for creating incredible models
  • All contributors who help improve this tool

πŸ“§ Contact

Abdul Omira - @abdul_omira

Project Link: https://github.com/abdul-omira/model-forge-cli


Made with ❀️ by Abdul Omira
