Mingyu-Wei
Popular repositories

  1. bigdl-llm-tutorial (Public)

    Forked from intel/ipex-llm-tutorial

    Accelerate LLMs with low-bit (FP4 / INT4 / FP8 / INT8) optimizations using bigdl-llm (a minimal usage sketch follows this list).

    Jupyter Notebook

  2. FastChat (Public)

    Forked from intel-staging/FastChat

    An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

    Python

  3. text-generation-webui (Public)

    Forked from intel-staging/text-generation-webui

    A Gradio Web UI for running local LLMs on Intel GPUs (e.g., a local PC with an iGPU, or discrete GPUs such as Arc, Flex and Max) using IPEX-LLM.

    Python

  4. Langchain-Chatchat (Public)

    Forked from intel-staging/Langchain-Chatchat

    Knowledge-base QA using a RAG pipeline on Intel GPUs (e.g., a local PC with an iGPU, or discrete GPUs such as Arc, Flex and Max) with IPEX-LLM.

    Python

  5. bigdl-project.github.io (Public)

    Forked from bigdl-project/bigdl-project.github.io

    HTML

  6. ipex-llm (Public)

    Forked from intel/ipex-llm

    Accelerate local LLM inference and fine-tuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPUs and GPUs (e.g., a local PC with an iGPU, or discrete GPUs such as Arc, Flex and Max)…

    Python
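
The first and last entries above (bigdl-llm-tutorial and ipex-llm) both center on loading Hugging Face models with low-bit optimizations on Intel hardware. The snippet below is a minimal sketch of that pattern, assuming the ipex-llm (formerly bigdl-llm) transformers-style API with AutoModelForCausalLM and load_in_4bit=True; the model path, prompt, and the "xpu" Intel GPU device are illustrative placeholders rather than anything taken from these repositories.

    # Minimal sketch: INT4-optimized inference with ipex-llm (formerly bigdl-llm).
    # Assumptions: ipex-llm and transformers are installed; the model path, prompt,
    # and the "xpu" (Intel GPU) device below are placeholders.
    import torch
    from transformers import AutoTokenizer
    from ipex_llm.transformers import AutoModelForCausalLM  # low-bit drop-in for the transformers class

    model_path = "/path/to/llama-2-7b-chat-hf"  # hypothetical local model path

    # load_in_4bit=True applies the low-bit (INT4) weight optimization at load time.
    model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
    tokenizer = AutoTokenizer.from_pretrained(model_path)

    # Move the optimized model to an Intel GPU ("xpu"); skip this step to stay on CPU.
    model = model.half().to("xpu")

    prompt = "What does low-bit quantization do?"
    with torch.inference_mode():
        input_ids = tokenizer.encode(prompt, return_tensors="pt").to("xpu")
        output = model.generate(input_ids, max_new_tokens=64)
        print(tokenizer.decode(output[0], skip_special_tokens=True))

The conversion happens entirely at load time, so the same call also works on a CPU-only machine if the .to("xpu") step is omitted; this is the general workflow the low-bit optimization descriptions above refer to.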