Amazon SageMaker

Configure Amazon SageMaker with Continue to use deployed LLM endpoints for both chat and embedding models, supporting LMI and HuggingFace TEI deployments with AWS credentials.

SageMaker can be used for both chat and embedding models. Chat models are supported for endpoints deployed with LMI, and embedding models are supported for endpoints deployed with HuggingFace TEI.

Here is an example SageMaker configuration:
```yaml
name: My Config
version: 0.0.1
schema: v1

models:
  - name: deepseek-6.7b-instruct
    provider: sagemaker
    model: lmi-model-deepseek-coder-xxxxxxx
    region: us-west-2
    roles:
      - chat
  - name: mxbai-embed
    provider: sagemaker
    model: mxbai-embed-large-v1-endpoint
    roles:
      - embed
```
The value in `model` should be the name of the SageMaker endpoint you deployed.
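If you are unsure of the exact endpoint name, you can list the endpoints deployed in your account. This is an optional sketch, not part of Continue itself; it assumes boto3 is installed and that the "sagemaker" credentials profile described below has been configured:

```python
# Optional sketch: list SageMaker endpoint names so you can copy the exact
# value into the `model` field above. Assumes boto3 is installed and a
# "sagemaker" profile exists in ~/.aws/credentials.
import boto3

session = boto3.Session(profile_name="sagemaker", region_name="us-west-2")
sagemaker_client = session.client("sagemaker")

for endpoint in sagemaker_client.list_endpoints()["Endpoints"]:
    print(endpoint["EndpointName"], endpoint["EndpointStatus"])
```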
Authentication is handled through temporary or long-term credentials stored in ~/.aws/credentials under a profile called "sagemaker":
```ini
[sagemaker]
aws_access_key_id = abcdefg
aws_secret_access_key = hijklmno
aws_session_token = pqrstuvwxyz # Optional: only needed for temporary (short-term) credentials.
```
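To confirm that the profile resolves to valid credentials before pointing Continue at it, a minimal check with boto3 (an assumption; the AWS CLI or any AWS SDK works just as well) looks like this:

```python
# Minimal sketch: verify that the "sagemaker" profile loads valid AWS credentials.
# Assumes boto3 is installed; the region matches the config example above.
import boto3

session = boto3.Session(profile_name="sagemaker", region_name="us-west-2")
identity = session.client("sts").get_caller_identity()

# Prints the account ID and caller ARN if the credentials are valid;
# raises a ClientError (e.g. an expired session token) otherwise.
print(identity["Account"], identity["Arn"])
```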