# LLM Hub 🤖

LLM Hub is an open-source Android app for on-device LLM chat and image generation. It's optimized for mobile usage (CPU/GPU/NPU acceleration) and supports multiple model formats, so you can run powerful models locally and privately.

🍎 iOS version is coming! I'm actively working on it and would love to bring LLM Hub to iPhone and iPad. The only thing standing in the way is hardware: I need to save up ~$1,000 for a Mac and an Apple Developer account.

If you've enjoyed the app and want to help make iOS happen, even a small amount goes a long way 💙

Sponsor timmyy123

Early sponsor perk: The iOS app will be a paid App Store release, but anyone who sponsors before launch gets a free redemption code on me. Just include your GitHub username or email in your sponsor note and I'll make sure you're taken care of when it ships!

## Download

Get it on Google Play

## 📸 Screenshots

AI Features | Chat Interface | Chat Interface 2

## 🚀 Features

### 🛠️ AI Tools Suite

| Tool | Description |
| --- | --- |
| 💬 Chat | Multi-turn conversations with RAG memory, web search, TTS auto-readout, and multimodal input |
| 🤖 creAItor [NEW] | Design custom AI personas with specialized system prompts (PCTF) in seconds |
| 💻 Vibe Coder [NEW] | Explain your app idea and watch it be built in real time with a live HTML/JS preview |
| ✍️ Writing Aid | Summarize, expand, rewrite, improve grammar, or generate code from descriptions |
| 🎨 Image Generator | Create images from text prompts using Stable Diffusion 1.5, with a swipeable gallery |
| 🌍 Translator | Translate text, images (OCR), and audio across 50+ languages - offline |
| 🎙️ Transcriber | Convert speech to text with on-device processing |
| 🛡️ Scam Detector | Analyze messages and images for phishing, with risk assessment |

### ✨ Vibes & creAItor

- Vibes: A full on-device coding environment. The LLM writes HTML/JS/CSS based on your requirements, and you can preview and run the app instantly in a secure sandbox.
- creAItor: Powerful persona generation for anything from characters with fun personalities to system architects. Just describe a creAItor ("respond like a pirate", or "respond with a markdown spec for a code agent to generate a full-stack system"), and the on-device LLM generates a complex system prompt (PCTF format) that you can use in chat.

### Kid Mode

Activate this in Settings under Kid Mode. Set a PIN, and the mode will remain in effect until you unlock it with the same PIN.

- Safe Exploration: Families can introduce children to private local AI with confidence.
- Model-Level Guardrails: Automatically activates strict safety protocols at the model inference layer across all tools (Chat, Writing Aid, Image Gen, etc.).
- Peace of Mind: Ensures all generated content is age-appropriate without compromising the privacy benefits of on-device processing.

πŸ” Privacy First

  • 100% on-device processing - no internet required for inference
  • Zero data collection - conversations never leave your device
  • No accounts, no tracking - completely private
  • Open-source - fully transparent

### ⚡ Advanced Capabilities

- GPU/NPU acceleration for fast performance
- Text-to-Speech with auto-readout
- RAG with global memory for enhanced responses
- Import custom models (.task, .litertlm, .qnn, .mnn, .gguf)
- Direct downloads from HuggingFace
- 16 language interfaces

## Quick Start

1. Download from Google Play or build from source
2. Open Settings → Download Models → Download or Import a model
3. Select a model and start chatting or generating images

## Supported Model Families (summary)

- Gemma (LiteRT Task)
- Llama (Task + GGUF variants)
- Phi (LiteRT LM)
- LiquidAI LFM (LFM 2.5 1.2B + LFM VL 1.6B vision-enabled)
- Ministral / Mistral family (GGUF / ONNX)
- IBM Granite (GGUF)

## Model Formats

- Task / LiteRT (.task): MediaPipe/LiteRT optimized models (GPU/NPU capable)
- LiteRT LM (.litertlm): LiteRT language models
- GGUF (.gguf): Quantized models; CPU inference powered by Nexa SDK. Some vision-capable GGUF models require an additional mmproj (multimodal projector) file
- ONNX (.onnx): Cross-platform model runtime
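As a rough sketch of what the format list above implies, a loader could dispatch on file extension like this (the names below are hypothetical illustrations, not the app's actual code):

```kotlin
// Hypothetical sketch: map a model file's extension to the runtime that
// would load it, per the format list above. Not the app's real loader.
enum class ModelRuntime { LITERT_TASK, LITERT_LM, NEXA_GGUF, ONNX, UNKNOWN }

fun runtimeForModel(fileName: String): ModelRuntime =
    when (fileName.substringAfterLast('.', "").lowercase()) {
        "task" -> ModelRuntime.LITERT_TASK   // MediaPipe/LiteRT (GPU/NPU capable)
        "litertlm" -> ModelRuntime.LITERT_LM // LiteRT language models
        "gguf" -> ModelRuntime.NEXA_GGUF     // CPU inference via Nexa SDK
        "onnx" -> ModelRuntime.ONNX          // cross-platform ONNX runtime
        else -> ModelRuntime.UNKNOWN
    }
```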

## GGUF Compatibility Notes

- Not all Android devices can load GGUF models in this app.
- GGUF loading/runtime depends on Nexa SDK native libraries and device/ABI support; on unsupported devices, GGUF model loading can fail even if the model file is valid.
- In this app, the GGUF NPU option is intentionally shown only for Snapdragon 8 Gen 4-class devices.
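To illustrate the ABI caveat, a gate like the one described might check the device's supported ABIs before offering GGUF at all. This is a hypothetical helper that assumes the native libraries ship only for 64-bit ARM; on Android you would pass `Build.SUPPORTED_ABIS.toList()`:

```kotlin
// Illustrative only, not the app's actual check: GGUF support is gated on
// the device exposing a 64-bit ARM ABI, since the native inference
// libraries are assumed to target arm64-v8a.
fun ggufLikelySupported(supportedAbis: List<String>): Boolean =
    "arm64-v8a" in supportedAbis
```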

## Importing models

- Settings → Download Models → Import Model → choose .task, .litertlm, .qnn, .gguf, or .mnn
- The full model list and download links live in app/src/.../data/ModelData.kt (do not exhaustively list variants in the README)

## Technology

- Kotlin + Jetpack Compose (Material 3)
- LLM Runtime: MediaPipe, LiteRT, Nexa SDK
- Image Gen: MNN / Qualcomm QNN
- Quantization: INT4/INT8
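For a back-of-envelope feel for what INT4/INT8 quantization buys, weight size is roughly parameters × bits ÷ 8 bytes (ignoring higher-precision embeddings, KV cache, and runtime overhead):

```kotlin
// Rough weight-only footprint estimate: params * bits / 8 bytes.
// Ignores layers kept at higher precision, KV cache, and runtime overhead.
fun approxWeightBytes(paramCount: Long, bitsPerWeight: Int): Long =
    paramCount * bitsPerWeight / 8

// e.g. a 1.2B-parameter model is roughly 0.6 GB of weights at INT4
// and roughly 1.2 GB at INT8.
```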

## Acknowledgments

- Nexa SDK: GGUF model inference support (credited in the in-app About screen) ⚡
- Google, Meta, Microsoft, IBM, LiquidAI, Mistral, HuggingFace: model and tooling contributions

## Development Setup

### Android local development (Android Studio + Gradle)

```sh
git clone https://github.com/timmyy123/LLM-Hub.git
cd LLM-Hub/android
./gradlew assembleDebug
./gradlew installDebug
```

### Android-only local configuration

#### Setting up Hugging Face Token for Development

To use private or gated models, add your HuggingFace token to android/local.properties (do NOT commit this file):

```properties
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

Save and sync Gradle in Android Studio; the app will read BuildConfig.HF_TOKEN at build time.
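A typical Gradle wiring for this pattern looks roughly like the sketch below (illustrative only; the repo's actual build script may differ): the build reads local.properties and emits the value as a BuildConfig field.

```kotlin
// build.gradle.kts (app module) -- illustrative sketch, not the repo's
// actual script. Reads HF_TOKEN from local.properties at build time.
import java.util.Properties

val localProps = Properties().apply {
    val f = rootProject.file("local.properties")
    if (f.exists()) f.inputStream().use { load(it) }
}

android {
    defaultConfig {
        // Quotes are required: BuildConfig fields are emitted as Java literals.
        buildConfigField(
            "String", "HF_TOKEN",
            "\"${localProps.getProperty("HF_TOKEN", "")}\""
        )
    }
}
```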

#### Dev Premium Flag

To skip ads and unlock all premium features locally without a real IAP purchase, add this to android/local.properties:

```properties
DEBUG_PREMIUM=true
```

Set it back to false before making a production build.
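The flag presumably feeds a gate along these lines. This is a hypothetical helper, not the app's code: in the app the override would come from BuildConfig.DEBUG_PREMIUM, made a parameter here so the logic stands alone.

```kotlin
// Hypothetical premium gate: a real IAP purchase OR the local dev
// override unlocks premium features.
fun isPremiumUnlocked(hasPurchase: Boolean, debugPremium: Boolean): Boolean =
    hasPurchase || debugPremium
```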

### Model License Acceptance

Some models on HuggingFace (especially from Google and Meta) require explicit license acceptance before downloading. When building the app locally:

1. Ensure you have a valid HuggingFace read token in local.properties (see above)
2. For each model you want to download, open its HuggingFace model page and accept the license terms with the same account that issued your token

Note: This is only required for local development builds. The Play Store version uses different authentication and does not require manual license acceptance for each model.
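For gated models, every download request must carry the read token; HuggingFace accepts a standard Bearer authorization header. A hypothetical helper (not the app's actual download code) might build the headers like this:

```kotlin
// Hypothetical sketch: build the auth headers for a gated HuggingFace
// download. HF expects "Authorization: Bearer <token>"; anonymous
// requests (empty map) work only for ungated models.
fun hfAuthHeaders(token: String?): Map<String, String> =
    if (token.isNullOrBlank()) emptyMap()
    else mapOf("Authorization" to "Bearer $token")
```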

### iOS local development (macOS + Xcode)

#### Prerequisites

- macOS with Xcode installed (use a version that matches your iOS device version)
- An Apple ID signed into Xcode (a free Personal Team works for local device testing)
- An iPhone with Developer Mode enabled if you run on real hardware

#### Build and run on iPhone

1. Clone the repo and open the iOS project:

   ```sh
   git clone https://github.com/timmyy123/LLM-Hub.git
   cd LLM-Hub
   open ios/LLMHub/LLMHub.xcodeproj
   ```

2. In Xcode, select the LLMHub target → Signing & Capabilities:
   - Set your Team
   - Set a unique Bundle Identifier (for example: com.yourname.llmhub)
   - Keep "Automatically manage signing" enabled
3. Select your iPhone as the run destination and press Run.

#### If you use Xcode beta

If your phone is on a newer iOS build and requires Xcode beta support, switch CLI tools:

```sh
sudo xcode-select -s /Applications/Xcode-beta.app/Contents/Developer
xcodebuild -version
```

#### Useful iOS dev troubleshooting

- If signing fails, re-check Team + Bundle Identifier in target settings.
- If the build cache acts stale, clean DerivedData:

  ```sh
  rm -rf ~/Library/Developer/Xcode/DerivedData/LLMHub-*
  ```

- Build logs: Report Navigator (Cmd+9)
- Runtime logs: Debug Console (Cmd+Shift+Y)

## Contributing

- Fork → branch → PR. See CONTRIBUTING.md (or open an issue/discussion if unsure).

## License

- Source code is licensed under the PolyForm Noncommercial License 1.0.0.
- You are free to use, study, and build on this project for non-commercial purposes.
- Commercial use (including distributing the app, charging for it, or monetizing it with ads or IAP) is not permitted without explicit written permission from the author.
- Contact timmyboy0623@gmail.com for commercial licensing enquiries.

## Support

If you find LLM Hub useful, consider sponsoring the project (see the sponsor note above).

## Notes

- This README is intentionally concise; consult ModelData.kt for exact model variants, sizes, and format details.

## Star History

Star History Chart