AI Providers

Configure and switch between AI providers in BraceKit.

BraceKit supports multiple AI providers, letting you switch between models instantly without leaving the sidebar. Each provider has its own configuration, and you can use multiple providers simultaneously.

Supported Providers

| Provider | Type | Models | Special Features |
|---|---|---|---|
| OpenAI | Cloud | GPT-5.2, GPT-4.1, o3, o4-mini | Reasoning models |
| Anthropic | Cloud | Claude 4.6 (Opus, Sonnet), Haiku 4.5 | Extended thinking |
| Gemini | Cloud | Gemini 3 Pro, Gemini 2.5 Pro/Flash | Google Search, Image gen |
| xAI | Cloud | Grok 4.1, Grok 4 | Image generation |
| DeepSeek | Cloud | V3.2, R1 | Reasoning (R1) |
| Ollama | Local | Any model | Offline, Private |
| Custom | Any | Any | Multi-format (OpenAI, Anthropic, Gemini, Ollama) |

Quick Setup

Step 1: Open Settings

  1. Click the Settings icon (โš™๏ธ) in the header
  2. Navigate to AI Provider tab

Step 2: Select Provider

Click any provider button in the grid to select it. The configuration fields below update to reflect the selected provider.

Step 3: Enter API Key

Each provider requires an API key (except Ollama):

| Provider | Get API Key |
|---|---|
| OpenAI | platform.openai.com/api-keys |
| Anthropic | console.anthropic.com |
| Gemini | aistudio.google.com |
| xAI | console.x.ai |
| DeepSeek | platform.deepseek.com |

Step 4: Select Model

Choose a model from the dropdown or type a custom model name.

Step 5: Done

Settings are saved automatically as you type. The provider is now active.

Switching Providers

To switch between configured providers:

  1. Click the provider button in the input toolbar (e.g., โ€œOpenAI โ–พโ€)
  2. Select a different provider from the grid
  3. Choose a model
  4. Continue chatting โ€” context is preserved

Note: Your conversation context carries over when switching providers. The new provider sees the same message history.
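Because every provider ultimately receives the same role/content message list, carrying context across a switch amounts to re-serializing that list into the new provider's request shape. The sketch below is illustrative, not BraceKit's actual code: it assumes a simple shared history and shows one real difference between two wire formats (the OpenAI Chat Completions API keeps the system prompt inside `messages`, while the Anthropic Messages API takes it as a top-level `system` field).

```python
# Illustrative sketch: one shared chat history, two provider payloads.
# Function names and the model IDs below are examples, not BraceKit APIs.

def to_openai(history, model):
    """OpenAI-format body: system prompt stays in the messages list."""
    return {"model": model, "messages": history}

def to_anthropic(history, model, max_tokens=1024):
    """Anthropic-format body: system prompt moves to a top-level field."""
    system = " ".join(m["content"] for m in history if m["role"] == "system")
    messages = [m for m in history if m["role"] != "system"]
    body = {"model": model, "max_tokens": max_tokens, "messages": messages}
    if system:
        body["system"] = system
    return body

history = [
    {"role": "system", "content": "You are concise."},
    {"role": "user", "content": "Summarize this page."},
]
openai_body = to_openai(history, "gpt-4.1")
anthropic_body = to_anthropic(history, "claude-sonnet-4-5")
```

The user/assistant turns are identical in both payloads, which is why the new provider sees the full conversation after a switch.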

Provider Features

Reasoning / Extended Thinking

Some models can show their reasoning process:

| Provider | Models | How to Enable |
|---|---|---|
| Anthropic | Claude 4.x, Claude 3.5 | Click brain icon (🧠) |
| OpenAI | o1, o3, o4-mini | Automatic |
| Gemini | 2.5 Pro, Thinking models | Click brain icon (🧠) |
| xAI | Grok 4, Grok 4.1 reasoning | Automatic (reasoning models) |
| DeepSeek | R1, Reasoner | Automatic |
| Ollama | With think mode | Click brain icon |
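At the API level, the brain-icon toggle plausibly maps to a per-provider request parameter. The sketch below shows two real parameters: the Anthropic Messages API accepts a `thinking` block, and OpenAI's reasoning models accept `reasoning_effort`. The helper function, budget, and effort values are illustrative assumptions, not BraceKit internals.

```python
# Illustrative sketch: what enabling reasoning could add to a request
# body, per provider. Values (budget_tokens, effort) are examples.

def with_reasoning(provider, body):
    """Return a copy of the request body with reasoning enabled."""
    body = dict(body)
    if provider == "anthropic":
        # Anthropic extended thinking: a "thinking" block with a token budget.
        body["thinking"] = {"type": "enabled", "budget_tokens": 10000}
    elif provider == "openai":
        # OpenAI reasoning models: a "reasoning_effort" parameter.
        body["reasoning_effort"] = "medium"
    return body

req = with_reasoning("anthropic", {"model": "claude-sonnet-4-5",
                                   "max_tokens": 2048})
```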

Function Calling / Tools

Most models support tool calling for MCP and built-in tools:

| Provider | Tool Support |
|---|---|
| OpenAI | ✅ Full |
| Anthropic | ✅ Full |
| Gemini | ✅ Full (image models limited) |
| xAI | ✅ Full |
| DeepSeek | ✅ Full |
| Ollama | ⚠️ Limited |

Image Generation

Generate images directly in chat:

| Provider | Models | Aspect Ratios |
|---|---|---|
| Gemini | gemini-2.5-flash-image | 1:1, 16:9, 9:16, etc. |
| xAI | grok-imagine-image, grok-2-image-1212 | 1:1, 16:9, 9:16, etc. |

Vision (Image Input)

Send images for analysis:

| Provider | Vision Models |
|---|---|
| OpenAI | GPT-5, GPT-4.1, GPT-4o |
| Anthropic | Claude 4.x, Claude 3.5 |
| Gemini | All Gemini models |
| xAI | Grok Vision |
| Ollama | llava, bakllava |
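Under the hood, an attached image is typically base64-encoded into the message itself. The sketch below builds an OpenAI-format vision message, where the image travels as a data URL inside the content array (Anthropic uses a similar structure with a `source` block). The helper name is illustrative, not a BraceKit API.

```python
import base64

def image_message(image_bytes, prompt, mime="image/png"):
    """Build an OpenAI-format user message carrying text plus an image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }

# Placeholder bytes stand in for a real screenshot or upload.
msg = image_message(b"\x89PNG fake bytes", "What is in this image?")
```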

API Key Security

Your API keys are:

  • Stored locally in Chromeโ€™s extension storage
  • Never sent to BraceKit servers
  • Only used to authenticate with the AI provider

Multiple API Keys

You can configure multiple providers simultaneously:

  1. Set up OpenAI with your OpenAI key
  2. Set up Anthropic with your Anthropic key
  3. Set up Gemini with your Google key
  4. Switch between them as needed

Each provider stores its own key independently.
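Conceptually, per-provider configuration behaves like a record per provider: writing one provider's key never touches another's. The field names and placeholder keys below are assumptions for illustration, not BraceKit's actual storage schema.

```python
# Illustrative sketch of independent per-provider config records.
# "sk-..." style strings are placeholders, not real keys.
providers = {
    "openai":    {"api_key": "sk-...",     "model": "gpt-4.1"},
    "anthropic": {"api_key": "sk-ant-...", "model": "claude-sonnet-4-5"},
}

def set_key(name, key):
    """Set one provider's key without disturbing the others."""
    providers.setdefault(name, {})["api_key"] = key

set_key("gemini", "AIza...")
```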

Custom Endpoints

For self-hosted or proxy services, add a custom provider:

  1. Click the + Add button in the provider grid
  2. Fill in the Name, Format, and Base URL:
    • OpenAI format โ€” LM Studio, vLLM, OpenRouter, Azure OpenAI
    • Anthropic format โ€” Anthropic-compatible proxies
    • Gemini format โ€” Gemini-compatible proxies
    • Ollama format โ€” Ollama native API
  3. Click Save Provider, then enter your API key in the Configuration section

See the Custom Provider guide for details.
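The format setting determines which URL paths and payload shapes are used against your Base URL. For the OpenAI format, chat requests go to `<base URL>/chat/completions`; the sketch below shows that resolution with LM Studio's documented default local server address as the example. The helper function is illustrative.

```python
def chat_url(base_url):
    """Resolve an OpenAI-format chat endpoint from a custom base URL."""
    return base_url.rstrip("/") + "/chat/completions"

# Self-hosted LM Studio using the OpenAI format:
url = chat_url("http://localhost:1234/v1")
# url == "http://localhost:1234/v1/chat/completions"
```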

Troubleshooting

โ€œNo models availableโ€

  • Check that your API key is valid
  • Verify that the key has the required permissions
  • Try typing a model name manually

โ€œAPI request failedโ€

  • Check your internet connection
  • Verify the API endpoint URL is correct
  • Ensure your API key has sufficient credits

โ€œModel not respondingโ€

  • Some models (o1, o3) take longer to respond
  • Check provider status pages for outages
  • Try a different model

For more help, see the Troubleshooting guide.