LM Studio

LM Studio is a desktop application for running LLMs locally; it exposes an OpenAI-compatible local server that Continue can connect to as a model provider.

Discover LM Studio models here, and get started with LM Studio here.

Configuration

name: My Config
version: 0.0.1
schema: v1

models:
  - name: <MODEL_NAME>
    provider: lmstudio
    model: <MODEL_ID>
    apiBase: http://<MY_ENDPOINT>/v1 # if running a remote instance of LM Studio
If apiBase is not set, it defaults to http://localhost:1234/v1, which is LM Studio's local server address.
Check out a more advanced configuration here.