Llamafile

Configure Continue to use Llamafile, a self-contained binary format that runs open-source language models like Mistral with no additional setup

A llamafile is a self-contained binary that can run an open-source LLM. You can configure this provider in your config.yaml as follows:
```yaml
name: My Config
version: 0.0.1
schema: v1

models:
  - name: Llamafile
    provider: llamafile
    model: mistral-7b
```
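Before Continue can connect, the llamafile server must be running locally. A minimal sketch of launching one (the filename here is illustrative; substitute whichever llamafile release you downloaded):

```shell
# Mark the downloaded llamafile as executable, then start its
# built-in server (serves on http://localhost:8080 by default).
chmod +x mistral-7b-instruct.llamafile
./mistral-7b-instruct.llamafile --server --nobrowser
```

With the server running, Continue's llamafile provider can send requests to it.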