config.yaml Reference
Comprehensive guide to the config.yaml format used by Continue.dev for building custom coding agents. Learn how to define models, context providers, rules, prompts, and more using YAML configuration.
Introduction
Continue Agents are defined using the config.yaml specification. Agents are composed of models, rules, and tools (MCP servers).
Configuring Models, Rules, and Tools
Learn how to work with Continue's configuration system, including using hub models, rules, and tools, creating local configurations, and organizing your setup.
Understanding Configs
Learn how to build and configure configs, understand their capabilities, and customize them for your development workflow.
Properties
Below are details for each property that can be set in config.yaml. All properties at all levels are optional unless explicitly marked as required.
The top-level properties in the config.yaml configuration file are:
- name (required)
- version (required)
- schema (required)
- models
- context
- rules
- prompts
- docs
- mcpServers
- data
name
The name property specifies the name of your project or configuration.
Example:
name: My Config
version: 1.0.0
schema: v1
version
The version property specifies the version of your project or configuration.
schema
The schema property specifies the schema version used for the config.yaml, e.g. v1.
models
The models section defines the language models used in your configuration. Models are used for functionalities such as chat, editing, and summarizing.
Properties:
- name (required): A unique name to identify the model within your configuration.
- provider (required): The provider of the model (e.g., openai, ollama).
- model (required): The specific model name (e.g., gpt-4, starcoder).
- apiBase: Can be used to override the default API base that is specified per model.
- roles: An array specifying the roles this model can fulfill, such as chat, autocomplete, embed, rerank, edit, apply, summarize. The default value is [chat, edit, apply, summarize]. Note that the summarize role is not currently used.
- capabilities: Array of strings denoting model capabilities, which will overwrite Continue's autodetection based on provider and model. See the Model Capabilities guide for detailed information. Supported capabilities include:
  - tool_use: Enables function/tool calling support (required for Agent mode)
  - image_input: Enables image upload and processing support
  Continue automatically detects these capabilities for most models, but you can override this when using custom deployments or if autodetection isn't working correctly.
- maxStopWords: Maximum number of stop words allowed, to avoid API errors with extensive lists.
- promptTemplates: Can be used to override the default prompt templates for different model roles. Valid values are chat, edit, apply, and autocomplete. The chat property must be a valid template name, such as llama3 or anthropic.
- chatOptions: If the model includes role chat, these settings apply for Agent and Chat mode:
  - baseSystemMessage: Can be used to override the default system prompt for Chat mode.
  - baseAgentSystemMessage: Can be used to override the default system prompt for Agent mode.
  - basePlanSystemMessage: Can be used to override the default system prompt for Plan mode.
- embedOptions: If the model includes role embed, these settings apply for embeddings:
  - maxChunkSize: Maximum tokens per document chunk. Minimum is 128 tokens.
  - maxBatchSize: Maximum number of chunks per request. Minimum is 1 chunk.
- defaultCompletionOptions: Default completion options for model settings.
  - contextLength: Maximum context length of the model, typically in tokens.
  - maxTokens: Maximum number of tokens to generate in a completion.
  - temperature: Controls the randomness of the completion. Values range from 0.0 (deterministic) to 1.0 (random).
  - topP: The cumulative probability for nucleus sampling.
  - topK: Maximum number of tokens considered at each step.
  - stop: An array of stop tokens that will terminate the completion.
  - reasoning: Boolean to enable thinking/reasoning for Anthropic Claude 3.7+ and some Ollama models.
  - reasoningBudgetTokens: Budget tokens for thinking/reasoning in Anthropic Claude 3.7+ models.
- requestOptions: HTTP request options specific to the model.
  - timeout: Timeout for each request to the language model.
  - verifySsl: Whether to verify SSL certificates for requests.
  - caBundlePath: Path to a custom CA bundle for HTTP requests.
  - proxy: Proxy URL for HTTP requests.
  - headers: Custom headers for HTTP requests.
  - extraBodyProperties: Additional properties to merge with the HTTP request body.
  - noProxy: List of hostnames that should bypass the specified proxy.
  - clientCertificate: Client certificate for HTTP requests.
    - cert: Path to the client certificate file.
    - key: Path to the client certificate key file.
    - passphrase: Optional passphrase for the client certificate key file.
- autocompleteOptions: If the model includes role autocomplete, these settings apply for tab autocompletion:
  - disable: If true, disables autocomplete for this model.
  - maxPromptTokens: Maximum number of tokens for the autocomplete prompt.
  - debounceDelay: Delay before triggering autocomplete in milliseconds.
  - modelTimeout: Model timeout for autocomplete requests in milliseconds.
  - maxSuffixPercentage: Maximum percentage of the prompt allocated for the suffix.
  - prefixPercentage: Percentage of input allocated for the prefix.
  - transform: If false, disables trimming of multiline completions. Defaults to true. Useful for models that generate better multiline completions without transformations.
  - template: Custom template for autocomplete using Mustache syntax. You can use the {{{ prefix }}}, {{{ suffix }}}, {{{ filename }}}, {{{ reponame }}}, and {{{ language }}} variables.
  - onlyMyCode: Only includes code within the repository for context.
  - useCache: If true, enables caching for completions.
  - useImports: If true, includes imports in context.
  - useRecentlyEdited: If true, includes recently edited files in context.
  - useRecentlyOpened: If true, includes recently opened files in context.
Example:
name: My Config
version: 1.0.0
schema: v1
models:
  - name: GPT-4o
    provider: openai
    model: gpt-4o
    roles:
      - chat
      - edit
      - apply
    defaultCompletionOptions:
      temperature: 0.7
      maxTokens: 1500
  - name: Codestral
    provider: mistral
    model: codestral-latest
    roles:
      - autocomplete
    autocompleteOptions:
      debounceDelay: 250
      maxPromptTokens: 1024
      onlyMyCode: true
  - name: My Model - OpenAI-Compatible
    provider: openai
    apiBase: http://my-endpoint/v1
    model: my-custom-model
    capabilities:
      - tool_use
      - image_input
    roles:
      - chat
      - edit
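The less common per-model options documented above, promptTemplates, chatOptions, embedOptions, and requestOptions with a client certificate, can all be combined on a single model entry. The following is a sketch only: the endpoint, model name, file paths, and header values are illustrative, not real defaults.

```yaml
models:
  - name: Secure Llama            # illustrative name
    provider: openai
    apiBase: https://llm.internal.example.com/v1   # illustrative endpoint
    model: llama3-70b
    roles:
      - chat
      - embed
    promptTemplates:
      chat: llama3                # must be a valid template name
    chatOptions:
      baseSystemMessage: You are a concise assistant for our internal codebase.
    embedOptions:
      maxChunkSize: 256           # tokens per document chunk (minimum 128)
      maxBatchSize: 8             # chunks per request (minimum 1)
    requestOptions:
      timeout: 60
      verifySsl: true
      headers:
        X-Team: platform          # illustrative custom header
      clientCertificate:
        cert: /etc/ssl/client.crt # illustrative paths
        key: /etc/ssl/client.key
        passphrase: ${{ secrets.CERT_PASSPHRASE }}
```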
context
The context section defines context providers, which supply additional information or context to the language models. Each context provider can be configured with specific parameters.
More information about usage/params for each context provider can be found here.
Properties:
- provider (required): The identifier or name of the context provider (e.g., code, docs, web)
- name: Optional name for the provider
- params: Optional parameters to configure the context provider's behavior.
Example:
name: My Config
version: 1.0.0
schema: v1
context:
  - provider: file
  - provider: code
  - provider: diff
  - provider: http
    name: Context Server 1
    params:
      url: "https://api.example.com/server1"
  - provider: terminal
rules
Configuration example:
name: My Config
version: 1.0.0
schema: v1
rules:
  - uses: sanity/sanity-opinionated # rules file stored on Continue Mission Control
  - uses: file://user/Desktop/rules.md # rules file stored on local computer
Rules file example:
---
name: Pirate rule
---
Talk like a pirate
See the rules deep dive for more details.
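In addition to uses: references, rules can be defined inline as plain strings in config.yaml (the same form appears in the complete configuration example on this page):

```yaml
rules:
  - Give concise responses
  - Always assume TypeScript rather than JavaScript
```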
prompts
Prompts can be invoked with a slash (/) command.
Configuration example:
name: My Config
version: 1.0.0
schema: v1
prompts:
  - uses: supabase/create-functions # prompts file stored on Continue Mission Control
  - uses: file://user/Desktop/prompts.md # prompts file stored on local computer
Prompts file example:
---
name: Make pirate comments
invokable: true
---
Rewrite all comments in the active file to talk like a pirate
See the prompts deep dive for more details.
docs
List of documentation sites to index.
Properties:
- name (required): Name of the documentation site, displayed in dropdowns, etc.
- startUrl (required): Start page for crawling - usually the root or intro page for the docs
- favicon: URL for the site favicon (default is /favicon.ico from startUrl).
- useLocalCrawling: Skip the default crawler and only crawl using a local crawler.
Example:
name: My Config
version: 1.0.0
schema: v1
docs:
  - name: Continue
    startUrl: https://docs.continue.dev/intro
    favicon: https://docs.continue.dev/favicon.ico
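For documentation that the default crawler cannot reach (for example, sites behind a VPN), useLocalCrawling restricts indexing to the local crawler. A sketch with an illustrative internal URL:

```yaml
docs:
  - name: Internal API Docs                          # illustrative name
    startUrl: https://docs.internal.example.com/intro # illustrative URL
    useLocalCrawling: true
```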
mcpServers
The Model Context Protocol is a standard proposed by Anthropic to unify prompts, context, and tool use. Continue supports any MCP server with the MCP context provider.
Properties:
- name (required): The name of the MCP server.
- command (required): The command used to start the server.
- args: An optional array of arguments for the command.
- env: An optional map of environment variables for the server process.
- cwd: An optional working directory to run the command in. Can be an absolute or relative path.
- requestOptions: Optional request options for sse and streamable-http servers. Same format as model requestOptions.
- connectionTimeout: Optional timeout for the initial connection to the MCP server.
Example:
name: My Config
version: 1.0.0
schema: v1
mcpServers:
  - name: My MCP Server
    command: uvx
    args:
      - mcp-server-sqlite
      - --db-path
      - ./test.db
    cwd: /Users/NAME/project
    env:
      NODE_ENV: production
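For servers that are slow to start, the optional connectionTimeout can be raised. This is a sketch only: the package name is illustrative, and the timeout unit is assumed (not specified above) to be milliseconds.

```yaml
mcpServers:
  - name: Slow Starter
    command: npx
    args:
      - -y
      - "@example/mcp-server"   # illustrative package name
    connectionTimeout: 30000    # assumed milliseconds
```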
data
Destinations to which development data will be sent.
Properties:
- name (required): The display name of the data destination
- destination (required): The destination/endpoint that will receive the data. Can be:
  - an HTTP endpoint that will receive a POST request with a JSON blob
  - a file URL to a directory in which events will be dumped to .jsonl files
- schema (required): the schema version of the JSON blobs to be sent. Options include 0.1.0 and 0.2.0
- events: an array of event names to include. Defaults to all events if not specified.
- level: a pre-defined filter for event fields. Options include all and noCode; the latter excludes data like file contents, prompts, and completions. Defaults to all
- apiKey: API key to be sent with the request (Bearer header)
- requestOptions: Options for event POST requests. Same format as model requestOptions.
Example:
name: My Config
version: 1.0.0
schema: v1
data:
  - name: Local Data Bank
    destination: file:///Users/dallin/Documents/code/continuedev/continue-extras/external-data
    schema: 0.2.0
    level: all
  - name: My Private Company
    destination: https://mycompany.com/ingest
    schema: 0.2.0
    level: noCode
    events:
      - autocomplete
      - chatInteraction
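The apiKey and requestOptions properties documented above can be added for authenticated HTTP destinations. A sketch, with an illustrative endpoint, secret name, and header:

```yaml
data:
  - name: Authenticated Ingest              # illustrative name
    destination: https://ingest.example.com/events # illustrative endpoint
    schema: 0.2.0
    level: noCode
    apiKey: ${{ secrets.INGEST_API_KEY }}   # sent as a Bearer header
    requestOptions:
      headers:
        X-Source: continue-dev              # illustrative custom header
```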
Complete YAML Config Example
Putting it all together, here's a complete example of a config.yaml configuration file:
name: My Config
version: 1.0.0
schema: v1
models:
  - uses: anthropic/claude-3.5-sonnet
    with:
      ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
    override:
      defaultCompletionOptions:
        temperature: 0.8
  - name: GPT-4
    provider: openai
    model: gpt-4
    roles:
      - chat
      - edit
    defaultCompletionOptions:
      temperature: 0.5
      maxTokens: 2000
    requestOptions:
      headers:
        Authorization: Bearer YOUR_OPENAI_API_KEY
  - name: Ollama Starcoder
    provider: ollama
    model: starcoder
    roles:
      - autocomplete
    autocompleteOptions:
      debounceDelay: 350
      maxPromptTokens: 1024
      onlyMyCode: true
    defaultCompletionOptions:
      temperature: 0.3
      stop:
        - "\n"
rules:
  - Give concise responses
  - Always assume TypeScript rather than JavaScript
prompts:
  - name: test
    description: Unit test a function
    prompt: |
      Please write a complete suite of unit tests for this function. You should use the Jest testing framework.
      The tests should cover all possible edge cases and should be as thorough as possible.
      You should also include a description of each test case.
  - uses: myprofile/my-favorite-prompt
context:
  - provider: diff
  - provider: file
  - provider: code
mcpServers:
  - name: DevServer
    command: npm
    args:
      - run
      - dev
    env:
      PORT: "3000"
data:
  - name: My Private Company
    destination: https://mycompany.com/ingest
    schema: 0.2.0
    level: noCode
    events:
      - autocomplete
      - chatInteraction
Using YAML Anchors to Avoid Config Duplication
You can also use YAML node anchors to avoid duplicating properties. To do so, you must add the YAML version header %YAML 1.1. Here's an example of a config.yaml configuration file using anchors:
%YAML 1.1
---
name: My Config
version: 1.0.0
schema: v1
model_defaults: &model_defaults
  provider: openai
  apiKey: my-api-key
  apiBase: https://api.example.com/llm
models:
  - name: mistral
    <<: *model_defaults
    model: mistral-7b-instruct
    roles:
      - chat
      - edit
  - name: qwen2.5-coder-7b-instruct
    <<: *model_defaults
    model: qwen2.5-coder-7b-instruct
    roles:
      - chat
      - edit
  - name: qwen2.5-coder-7b
    <<: *model_defaults
    model: qwen2.5-coder-7b
    useLegacyCompletionsEndpoint: false
    roles:
      - autocomplete
    autocompleteOptions:
      debounceDelay: 350
      maxPromptTokens: 1024
      onlyMyCode: true
config.json Deprecation
config.yaml replaces config.json, which is deprecated. View the Migration Guide for help transitioning from the old format.