We all want to have a local LLM, so here is one for you to own in a PDF.

Run LLMs inside a PDF file.

Screenshots

What is llm-pdf?

  • This is a proof-of-concept project showing that it is possible to run an entire Large Language Model in nothing but a PDF file.
  • It uses Emscripten to compile llama.cpp to asm.js, which is then executed inside the PDF through an old JavaScript-injection technique supported by some PDF viewers.
  • Combined with embedding the entire model file into the PDF as base64, this lets LLM inference run in nothing but a PDF (a minimal sketch of the embedding step follows this list).
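
To make the embedding step concrete, here is a minimal sketch in Python. It uses the third-party pypdf library rather than the project's actual scripts/generate.py, the file names model.gguf and llm.pdf are placeholders, and the Emscripten-compiled llama.cpp payload that would actually consume the embedded model is omitted:

# Minimal sketch of the embedding idea, assuming pypdf.
# Not the project's generate.py; the asm.js llama.cpp build that
# would decode and run MODEL_B64 is omitted.
import base64
from pypdf import PdfWriter

writer = PdfWriter()
writer.add_blank_page(width=612, height=792)  # give the viewer a page to render

# Encode the GGUF model so it can live inside a JavaScript string.
with open("model.gguf", "rb") as f:
    model_b64 = base64.b64encode(f.read()).decode("ascii")

# Document-level JavaScript runs when the PDF is opened in a
# JavaScript-capable viewer; llm-pdf's interpreter code would follow.
writer.add_js(f'var MODEL_B64 = "{model_b64}";')

with open("llm.pdf", "wb") as out:
    writer.write(out)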

Load a Custom Model in the PDF

The scripts/generate.py file will help you create a PDF with any compatible LLM.

The easiest way to get started is with the following command:

cd scripts
python3 generate.py --model "path/to/model.gguf" --output "path/to/output.pdf"
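
Because the generated PDF relies on the viewer's JavaScript engine, it needs to be opened in a JavaScript-capable viewer (Chromium's built-in PDF viewer, for example); an image-only renderer will show nothing but a blank page.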

Choosing Models

Here are the general guidelines when picking a model:

  • Only GGUF quantized models work (a quick magic-byte check is sketched after this list).
  • Generally, prefer Q8-quantized models, as those run the fastest.
  • For reference, 135M-parameter models take around 5 seconds per token of input/output. Anything larger will likely be unreasonably slow.
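
Since only GGUF files work, a quick sanity check before embedding is to inspect the file's magic bytes: GGUF files begin with the four ASCII bytes "GGUF". This is a standalone sketch, not part of the repo's tooling:

# Minimal sketch: verify a file is GGUF before embedding it.
# GGUF files start with the ASCII magic bytes b"GGUF".
def looks_like_gguf(path: str) -> bool:
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

assert looks_like_gguf("path/to/model.gguf"), "not a GGUF file"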

Inspiration & Credits

Thank you to the following for inspiration and reference:

Thanks to the following for creating the tiny LLMs that power llm-pdf:
