Popular repositories
- bigdl-llm-tutorial (Public; forked from intel/ipex-llm-tutorial)
  Accelerate LLM with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using bigdl-llm
  Jupyter Notebook
- FastChat (Public; forked from intel-staging/FastChat)
  An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
  Python
- text-generation-webui (Public; forked from intel-staging/text-generation-webui)
  A Gradio web UI for running local LLMs on Intel GPUs (e.g., a local PC with an iGPU, or a discrete GPU such as Arc, Flex, or Max) using IPEX-LLM.
  Python
- Langchain-Chatchat (Public; forked from intel-staging/Langchain-Chatchat)
  Knowledge-base QA using a RAG pipeline on Intel GPUs (e.g., a local PC with an iGPU, or a discrete GPU such as Arc, Flex, or Max) with IPEX-LLM
  Python
- bigdl-project.github.io (Public; forked from bigdl-project/bigdl-project.github.io)
  HTML
- ipex-llm (Public; forked from intel/ipex-llm)
  Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max)…
  Python
