Bamboo-7B Large Language Model
Updated Mar 28, 2024
Run large Mixture-of-Experts (MoE) LLMs whose weights exceed system RAM on Apple Silicon by loading only the router-selected experts from SSD with MLX. Includes an OpenAI/Anthropic-compatible API server for local agentic coding.
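A minimal sketch of the expert-offloading idea described above, in plain Python. The class name, pickle-based file layout, and eviction policy are illustrative assumptions, not this project's actual MLX loader: the point is simply that per-expert weight files stay on SSD, only the experts the router selects are read in, and a small LRU cache keeps the resident working set under a RAM budget.

```python
import pickle
import tempfile
from collections import OrderedDict
from pathlib import Path


class ExpertOffloadCache:
    """Hypothetical expert cache: weights live on SSD, RAM holds a few."""

    def __init__(self, expert_dir: Path, max_resident: int):
        self.expert_dir = expert_dir
        self.max_resident = max_resident   # RAM budget, in number of experts
        self.resident = OrderedDict()      # expert_id -> weights (LRU order)

    def get(self, expert_id: int):
        if expert_id in self.resident:
            self.resident.move_to_end(expert_id)   # mark as recently used
            return self.resident[expert_id]
        # Cache miss: read only this expert's weights from SSD.
        path = self.expert_dir / f"expert_{expert_id}.pkl"  # assumed layout
        weights = pickle.loads(path.read_bytes())
        self.resident[expert_id] = weights
        if len(self.resident) > self.max_resident:
            self.resident.popitem(last=False)      # evict least recently used
        return weights


# Demo: 8 experts on "SSD", but at most 2 ever resident in RAM at once.
tmp = Path(tempfile.mkdtemp())
for i in range(8):
    (tmp / f"expert_{i}.pkl").write_bytes(pickle.dumps({"w": [float(i)] * 4}))

cache = ExpertOffloadCache(tmp, max_resident=2)
for expert_id in [3, 5, 3, 7]:   # router-selected experts, token by token
    cache.get(expert_id)
print(sorted(cache.resident))    # → [3, 7]
```

In a real MoE forward pass the router's top-k indices for each token would drive `get`, so I/O is paid only for experts actually activated; a model much larger than RAM stays usable as long as the per-token expert working set fits the budget.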