Multi-model LLM interface integrating Open WebUI with local and API-hosted models. Llama | DeepSeek | LiteLLM

obj809/multi-llm-ui


Multi LLM UI

Deployment Link

Live Deployment

Create an Account

To use the hosted models, open the app and select Sign up on the login page.

Note: The signup email does not need to be a real email address.

After creating your account, sign in to access the available Open WebUI and LiteLLM-backed models.
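Beyond the chat interface, Open WebUI instances also expose an OpenAI-compatible chat-completions API, so the hosted models could in principle be queried programmatically. A minimal sketch follows; the deployment URL, API key, and model tag are placeholders, not values from this project:

```python
import json
import urllib.request

BASE_URL = "https://example-deployment.com"  # placeholder -- not this project's real URL
API_KEY = "sk-..."  # placeholder -- a key generated in Open WebUI's account settings


def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for a hosted model."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/api/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Sending the request would look like:
#   with urllib.request.urlopen(build_chat_request("llama3.1:latest", "Hello!")) as resp:
#       print(json.load(resp))
req = build_chat_request("llama3.1:latest", "Hello!")
print(req.full_url)
```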

Summary

This project runs a server hosting several AI tools inside Docker containers.

Open WebUI provides the chat interface through which the various LLMs are accessed.

Ollama is bundled into the Open WebUI container and runs a variety of local models (Llama, DeepSeek, Gemma, CodeLlama, TinyLlama).

LiteLLM is also connected to the Open WebUI interface, providing access to a variety of cloud-based models (Gemini, Claude, etc.).
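LiteLLM typically exposes cloud models to Open WebUI through a proxy config. The sketch below shows what such a file could look like; the model names and environment variables are illustrative, not this project's actual settings:

```yaml
# Hypothetical LiteLLM proxy config (config.yaml) -- model entries and
# environment variable names are assumptions, not the project's real file.
model_list:
  - model_name: gemini-flash
    litellm_params:
      model: gemini/gemini-1.5-flash
      api_key: os.environ/GEMINI_API_KEY
  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY
```

Open WebUI can then be pointed at the LiteLLM proxy as an additional OpenAI-compatible connection, so cloud and local models appear side by side in the same model picker.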

A primary objective was to provide a modern interface for future LLM experimentation.
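The setup described above could be sketched as a Docker Compose file. Service names, images, ports, and volumes here are assumptions based on the summary, not the project's actual configuration:

```yaml
# Hypothetical docker-compose.yml -- illustrative only, not this project's file.
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:ollama   # variant that bundles Ollama in the same container
    ports:
      - "3000:8080"
    volumes:
      - ollama:/root/.ollama                      # local model weights
      - open-webui:/app/backend/data              # accounts, chats, settings

  litellm:
    image: ghcr.io/berriai/litellm:main-latest    # proxy for cloud models (Gemini, Claude, ...)
    ports:
      - "4000:4000"
    volumes:
      - ./litellm-config.yaml:/app/config.yaml
    command: ["--config", "/app/config.yaml"]

volumes:
  ollama:
  open-webui:
```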

Screenshot

Project Screenshot

Models Hosted

Project Models:

  • [Pending...]

Ollama Based:

  • Llama 3.1 (latest), TinyLlama 1.1B, Gemma 3 4B, DeepSeek R1 8B

References

Ollama Model Library

Open WebUI on GitHub

LiteLLM on GitHub
