LLM_classifier

Comparing local Visual Language Models on classification tasks

The demo pipeline expects a sample dataset in which each sub-folder is a class containing that class's images. For example, download the MNIST dataset with:

python download_mnist.py
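The expected layout (one folder per class) can be checked with a short sketch. `list_classes` is a hypothetical helper, not part of the repository; it simply infers the class names from the sub-folder names:

```python
import pathlib

def list_classes(dataset_root):
    """Return class names inferred from sub-folder names (one folder per class)."""
    root = pathlib.Path(dataset_root)
    return sorted(p.name for p in root.iterdir() if p.is_dir())
```

For the MNIST example above, this would return the digit folders (e.g. `['0', '1', ..., '9']`).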

Demo pipeline:

  1. On Linux, download Ollama (the local LLM server) and start it:
     curl https://ollama.ai/install.sh | sh
     ollama serve &
  2. Pull a vision-language model (e.g. minicpm-v):
     ollama pull minicpm-v
  3. Run the Python script:
     python main.py --model='minicpm-v' --dataset='mini_mnist/mnist_png/testing'

Alternatively, pass the URL of a single image to classify just that image.
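Under the hood, classification requests go to the local Ollama server. The exact prompt used by `main.py` is not shown here, so the sketch below is only an assumed shape of such a request: it builds a JSON payload for Ollama's `/api/generate` endpoint, attaching the image as base64 and asking the model to answer with one class name. `build_request`, its prompt wording, and the class list are illustrative assumptions, not the repository's actual code:

```python
import base64

# Default local Ollama endpoint (assumption: standard Ollama install).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model, image_bytes, classes):
    """Build a JSON payload asking the vision model to pick one class for the image."""
    # Hypothetical prompt; the real prompt in main.py may differ.
    prompt = "Classify this image. Answer with exactly one of: " + ", ".join(classes)
    return {
        "model": model,                 # e.g. "minicpm-v"
        "prompt": prompt,
        # Ollama accepts images as base64-encoded strings.
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,                # return a single JSON response
    }
```

The payload would then be POSTed to `OLLAMA_URL` (e.g. with `requests.post(OLLAMA_URL, json=payload)`), and the predicted class read from the `response` field of the reply.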
