# Model Providers Overview
Continue supports a wide range of AI model providers to power different features like chat, code editing, autocompletion, and embeddings. This overview helps you navigate through the available options and find the right provider for your needs.
## Popular Model Providers
These are the most commonly used model providers that offer a wide range of capabilities:
| Provider | Description | Capabilities |
|---|---|---|
| Anthropic | Maker of Claude models, known for long context windows and strong reasoning | Chat, Edit, Apply |
| OpenAI | Creators of GPT models with strong coding capabilities | Chat, Edit, Apply, Embeddings |
| Azure | Microsoft's cloud platform offering OpenAI models | Chat, Edit, Apply, Embeddings |
| Amazon Bedrock | AWS service offering access to various foundation models | Chat, Edit, Apply, Embeddings |
| Ollama | Run open-source models locally with a simple interface | Chat, Edit, Apply, Embeddings, Autocomplete |
| Google Gemini | Google's multimodal AI models | Chat, Edit, Apply, Embeddings |
| DeepSeek | Specialized code models with strong performance | Chat, Edit, Apply |
| Mistral | High-performance open models with commercial offerings | Chat, Edit, Apply, Embeddings |
| xAI | Grok models from xAI | Chat, Edit, Apply |
| Vertex AI | Google Cloud's machine learning platform | Chat, Edit, Apply, Embeddings |
| Inception | On-premises open-source model runners | Chat, Edit, Apply |
| HuggingFace | Platform for open source models with inference providers and endpoints | Chat, Edit, Apply, Embeddings |
## Additional Model Providers
Beyond the top-level providers, Continue supports many other options:
### Hosted Services
| Provider | Description |
|---|---|
| Groq | Ultra-fast inference for various open models |
| Together AI | Platform for running a variety of open models |
| DeepInfra | Hosting for various open source models |
| OpenRouter | Gateway to multiple model providers |
| Tetrate Agent Router Service | Gateway with intelligent routing across multiple model providers |
| Cohere | Models specialized for semantic search and text generation |
| NVIDIA | GPU-accelerated model hosting |
| Cloudflare | Edge-based AI inference services |
### Local Model Options
| Provider | Description |
|---|---|
| LM Studio | Desktop app for running models locally |
| llama.cpp | Optimized C++ implementation for running LLMs |
| LlamaStack | Stack for running Llama models locally |
| llamafile | Self-contained executable model files |
### Enterprise Solutions
| Provider | Description |
|---|---|
| SambaNova | Enterprise AI platform |
| watsonx | IBM's enterprise AI platform |
| SageMaker | AWS machine learning platform |
| Nebius | Cloud-based machine learning platform |
## How to Choose a Model Provider
When selecting a model provider, consider:
- Hosting preference: Do you need local models for offline use or privacy, or are you comfortable with cloud services?
- Performance requirements: Different providers offer varying levels of speed, quality, and context length.
- Specific capabilities: Some models excel at code generation, others at embeddings or reasoning tasks.
- Pricing: Costs vary significantly between providers, from free local options to premium cloud services.
- API key requirements: Most cloud providers require API keys that you'll need to configure.
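As one sketch of the hosting trade-off, a fully local setup can point Continue at Ollama, which needs no API key. The model tag below is an assumption for illustration; substitute whichever model you have pulled locally:

```yaml
models:
  - name: Llama 3.1 8B (local)
    provider: ollama          # runs entirely on your machine; no API key required
    model: llama3.1:8b        # assumed tag; run `ollama pull llama3.1:8b` first
    roles:
      - chat
      - edit
      - apply
```

A setup like this keeps all code on your machine, at the cost of the speed and quality a hosted frontier model can offer.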
## Configuration Format
You can add models to your `config.yaml` file like this:

```yaml
models:
  - name: Claude 4 Sonnet
    provider: anthropic # Choose a provider from the lists above
    model: claude-sonnet-4-20250514 # Specific model name
    apiKey: ${{ secrets.ANTHROPIC_API_KEY }}
    roles:
      - chat
      - edit
      - apply
```
For more detailed configuration, visit the specific provider pages linked above.
## Change Your Model Provider
Continue lets you choose your favorite model provider, or configure several at once, so you can use different models for different tasks or switch to another model if you're not happy with your current results. Continue supports all of the popular model providers, including OpenAI, Anthropic, Microsoft/Azure, Mistral, and more, and you can even self-host your own model provider. Learn more about model providers.
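As a hedged example of mixing providers (model names are illustrative, and the local entry assumes Ollama is running), a configuration might assign a cloud model to chat and edit while a small local model handles autocomplete:

```yaml
models:
  - name: Claude 4 Sonnet
    provider: anthropic
    model: claude-sonnet-4-20250514
    apiKey: ${{ secrets.ANTHROPIC_API_KEY }}
    roles:
      - chat
      - edit
  - name: Qwen2.5-Coder (local)
    provider: ollama
    model: qwen2.5-coder:1.5b   # assumed tag; a small model keeps autocomplete fast
    roles:
      - autocomplete
```

Splitting roles this way is a common pattern: autocomplete fires on nearly every keystroke, so latency and cost matter more there than raw capability.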