How to Configure OpenAI Models with Continue
You can discover available OpenAI models here. To get started, obtain an API key from the OpenAI Console.
Configuration

```yaml
name: My Config
version: 0.0.1
schema: v1
models:
  - name: <MODEL_NAME>
    provider: openai
    model: <MODEL_ID>
    apiKey: <YOUR_OPENAI_API_KEY>
```
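For example, a filled-in configuration might look like the following. The display name and model ID below are illustrative; substitute the model you actually intend to use:

```yaml
name: My Config
version: 0.0.1
schema: v1
models:
  - name: GPT-4o           # display name shown in Continue (illustrative)
    provider: openai
    model: gpt-4o          # illustrative model ID; pick any model your key can access
    apiKey: sk-...         # your OpenAI API key; keep it out of version control
```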
Check out a more advanced configuration here
OpenAI API compatible providers
OpenAI API compatible providers include:
- KoboldCpp
- text-gen-webui
- FastChat
- LocalAI
- llama-cpp-python
- TensorRT-LLM
- vLLM
- BerriAI/litellm
- Tetrate Agent Router Service
If you are using an OpenAI API compatible provider, you can change the apiBase like this:

```yaml
name: My Config
version: 0.0.1
schema: v1
models:
  - name: <OPENAI_API_COMPATIBLE_PROVIDER_MODEL>
    provider: openai
    model: <MODEL_NAME>
    apiBase: http://localhost:8000/v1
    apiKey: <YOUR_CUSTOM_API_KEY>
```
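As a concrete sketch, if you were serving a model locally with vLLM (which exposes an OpenAI-compatible API on port 8000 by default), the configuration might look like this. The model name below is illustrative and must match whatever your server is actually serving:

```yaml
name: My Config
version: 0.0.1
schema: v1
models:
  - name: Llama 3 8B (vLLM)                      # display name in Continue (illustrative)
    provider: openai
    model: meta-llama/Meta-Llama-3-8B-Instruct   # must match the model the server loaded
    apiBase: http://localhost:8000/v1            # vLLM's default OpenAI-compatible endpoint
    apiKey: none                                 # many local servers accept any placeholder key
```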
How to Force Legacy Completions Endpoint Usage
To force usage of the legacy completions endpoint instead of chat/completions, you can set:

```yaml
name: My Config
version: 0.0.1
schema: v1
models:
  - name: <OPENAI_API_COMPATIBLE_PROVIDER_MODEL>
    provider: openai
    model: <MODEL_NAME>
    apiBase: http://localhost:8000/v1
    useLegacyCompletionsEndpoint: true
```
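This is useful for models that are only served through the older completions endpoint, such as OpenAI's gpt-3.5-turbo-instruct. A minimal sketch of that configuration (no apiBase is needed when talking to OpenAI directly):

```yaml
name: My Config
version: 0.0.1
schema: v1
models:
  - name: GPT-3.5 Turbo Instruct
    provider: openai
    model: gpt-3.5-turbo-instruct     # served via /v1/completions, not /v1/chat/completions
    apiKey: <YOUR_OPENAI_API_KEY>
    useLegacyCompletionsEndpoint: true
```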