
yashasnadigsyn/shareML


ShareML ⚡

Share Machine Learning models as colorful barcodes.

ShareML allows you to compress and encode entire machine learning models into JabCode symbols, colored 2D barcodes that can store much more data than standard QR codes.

Snap a picture, scan it, and load the trained model instantly on another device. No cloud, no internet, just pixels.

ShareML Demo

Features

  • High-Density Storage: Uses multi-colored JabCodes (8 colors) to store ~3x more data than QR codes.
  • Micro-Compression: Custom serialization + zlib compression to fit models into kilobytes.
  • Quantization Support: Choose between float64, float32, or float16 to trade precision for size.
  • Offline: Works completely offline. The model is the image.
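To build intuition for the quantization option, here is a small illustration of the precision/size trade-off using plain NumPy (independent of ShareML's internals, which are not shown in this README):

```python
import numpy as np

# A few example weights at full float64 precision
weights = np.array([0.12345678, -3.14159265, 100.0625], dtype=np.float64)

for dtype in ("float64", "float32", "float16"):
    cast = weights.astype(dtype)
    # itemsize: bytes per value; err: precision lost by the cast
    err = np.max(np.abs(cast.astype(np.float64) - weights))
    print(dtype, cast.itemsize, "bytes/value, max abs error ~", err)
```

Casting to float16 cuts storage to a quarter of float64 while keeping roughly three significant decimal digits, which is usually enough for inference with small classifiers.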

Supported Models

  • Linear Models: Logistic Regression, Linear SVM, Ridge, Lasso, SGD, Perceptron
  • Trees: Decision Trees, Random Forests (small)
  • Neural Networks: PyTorch MLPs (nn.Sequential)
  • Bayes: Gaussian & Multinomial Naive Bayes
  • Neighbors: K-Nearest Neighbors
  • Pipelines: sklearn Pipelines with Scalers (Standard/MinMax)

Installation

# Clone the repository
git clone https://github.com/yourusername/shareml.git
cd shareml

# Install with pip (or uv)
pip install .

Important

System Requirements: This package currently relies on the jabcodeReader and jabcodeWriter binaries, which are compiled for Linux (x86-64). macOS and Windows support requires recompiling the JabCode source.

Quick Start

1. Encode a Model

Train your model normally, then save it as an image.

import shareml
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a model on example data
X, y = make_classification(n_samples=100, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

# Save as JabCode image
# Use float16 for maximum compression!
shareml.encode(model, "my_model.png", quantization="float16")

2. Decode a Model

Load the model from the image on any machine with ShareML installed.

import shareml

# Load model from image
model = shareml.decode("my_model.png")

# Use it immediately!
predictions = model.predict(X_test)

How It Works

  1. Serialize: Converts the model's weights and architecture into a compact binary format.
  2. Quantize: (Optional) Reduces float precision to save 50-75% space.
  3. Compress: Applies zlib compression.
  4. Encode: Maps the binary stream to JabCode color symbols.
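The first three steps can be sketched in a few lines of NumPy and zlib. This is an illustration of the idea only, not ShareML's actual serialization format; `pack_weights` is a hypothetical helper:

```python
import zlib
import numpy as np

def pack_weights(weights, quantization="float16"):
    """Sketch of serialize -> quantize -> compress (not the real ShareML format)."""
    # 1-2. Quantize: cast to a lower-precision dtype, then serialize to raw bytes
    raw = weights.astype(quantization).tobytes()
    # 3. Compress: zlib shrinks the binary stream further
    return zlib.compress(raw, level=9)

# 1000 float64 parameters = 8000 bytes before packing
weights = np.random.default_rng(0).normal(size=1000)
packed = pack_weights(weights, "float16")
print(len(packed), "bytes after quantization + compression")
```

The final step, mapping the compressed bytes onto colored JabCode modules, is handled by the bundled jabcodeWriter binary.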

Limitations

  • Capacity: JabCodes max out at roughly 100KB (using multiple symbols). This is perfect for simple classifiers and small neural nets, but won't hold an LLM or a ResNet.
  • Platform: Currently Linux-only due to binary dependencies.
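A back-of-envelope parameter budget for that ~100KB ceiling (pre-compression; real capacity depends on symbol count and error-correction level, so treat these as rough upper bounds):

```python
# Bytes available in a ~100KB JabCode payload
CAPACITY_BYTES = 100 * 1024

# Storage cost per parameter at each supported quantization level
BYTES_PER_PARAM = {"float64": 8, "float32": 4, "float16": 2}

# Maximum parameter count that fits at each precision
budget = {dtype: CAPACITY_BYTES // size for dtype, size in BYTES_PER_PARAM.items()}
print(budget)  # {'float64': 12800, 'float32': 25600, 'float16': 51200}
```

So even at float16, the ceiling is around 50k parameters, which comfortably covers linear models and small MLPs but nothing deeper.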

⚠️ VIBECODE NOTE

This codebase was vibecoded, but with my guidance. I recommend reading and clearly understanding the code before using it.

License

MIT License. See LICENSE for details.


Powered by JabCode.

