This repository contains Continue.dev configuration files for local and cloud-based AI coding assistants.
- Open VSCode
- Go to Extensions (Cmd+Shift+X on Mac, Ctrl+Shift+X on Windows/Linux)
- Search for "Continue" (full extension name: "Continue - open-source AI code agent")
- Click Install on the Continue extension
Continue.dev uses a configuration file located at:
- macOS/Linux: `~/.continue/config.yaml`
- Windows: `%USERPROFILE%\.continue\config.yaml`
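Before copying a config over, it can help to back up whatever is already there. A minimal sketch for macOS/Linux (the `.bak` suffix is just a convention, not something Continue.dev requires):

```shell
# Create the config directory if it does not exist yet
mkdir -p ~/.continue

# Back up an existing config before it gets overwritten
if [ -f ~/.continue/config.yaml ]; then
    cp ~/.continue/config.yaml ~/.continue/config.yaml.bak
fi
```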
Copy the desired configuration file from this repository to your Continue.dev config location:
```shell
# Example for macOS/Linux
cp mac-mini-m4pro-48GB-config-yaml ~/.continue/config.yaml
```

Ollama supports loading GGUF quantized models directly from Hugging Face.
Download and install Ollama from ollama.ai
Ollama can pull models directly using the hf.co/ prefix:
```shell
# Example: pull a quantized model from Hugging Face
ollama pull hf.co/bartowski/FastApply-1.5B-v1.0-GGUF:latest

# Another example
ollama pull hf.co/QuantFactory/Qwen3-Reranker-4B-GGUF:latest
```

Add the model to your config.yaml:
```yaml
models:
  - name: FastApply 1.5B
    provider: ollama
    model: hf.co/bartowski/FastApply-1.5B-v1.0-GGUF:latest
    roles:
      - apply
    defaultCompletionOptions:
      contextLength: 32768
```

Continue.dev supports several model roles:
- autocomplete: Code completion as you type
- chat: Chat-based interactions
- edit: Code editing suggestions
- apply: Applying code changes
- embed: Creating embeddings for retrieval
- rerank: Reranking search results
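A single config.yaml can assign different roles to different models. A hypothetical sketch (the model names here are illustrative, not taken from this repository):

```yaml
models:
  - name: Qwen2.5 Coder 7B
    provider: ollama
    model: qwen2.5-coder:7b
    roles:
      - chat
      - edit
  - name: Nomic Embed
    provider: ollama
    model: nomic-embed-text
    roles:
      - embed
```

Models that omit the `roles` key typically take on the default roles, so explicit role lists like the ones above are only needed when you want to dedicate a model to specific tasks.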
- Continue.dev Documentation: https://docs.continue.dev
- Model Configuration Guide: https://docs.continue.dev/customization/models
- Ollama Documentation: https://github.com/ollama/ollama
- Hugging Face Models: https://huggingface.co/models
- GGUF Models Guide: https://huggingface.co/docs/hub/gguf
Feel free to submit configurations for different hardware setups or model combinations!
See the LICENSE file for details.