Custom Provider
Connect to any AI API endpoint with multiple format support.
Connect BraceKit to any AI API endpoint with support for multiple formats: OpenAI, Anthropic, Gemini, and Ollama. Perfect for self-hosted models, proxies, or alternative services.
Use Cases
- Local models: LM Studio, Jan.ai, LocalAI
- Proxies: OpenRouter, Azure OpenAI
- Self-hosted: vLLM, TGI, text-generation-webui
- Alternative services: Any OpenAI-compatible API
Setup
1. Open Settings
- Click Settings (⚙️)
- Go to AI Provider tab
2. Add Custom Provider
Click the + Add button in the provider grid and fill in:
| Field | Description | Example |
|---|---|---|
| Name | Display name | “LM Studio” |
| Format | API format | OpenAI, Anthropic, Gemini, Ollama |
| Base URL | API endpoint | http://localhost:1234/v1 |
Click Save Provider to create it.
3. Configure and Use
- The new provider appears in the provider grid – click to select it
- Enter your API Key in the Configuration section (or leave empty for local servers)
- Add or select a Model in the Configuration section
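Once a provider is selected and a model is set, chat requests go to the configured Base URL. For an OpenAI-format provider, the request body looks roughly like the sketch below (field names follow the OpenAI chat completions API; the model name and the exact payload BraceKit builds are assumptions):

```python
import json

# Sketch of the chat request body an OpenAI-format provider receives.
# "model" is whatever you entered in the Configuration section; the
# model name and exact fields here are illustrative assumptions.
request_body = {
    "model": "llama-3.1-8b",
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
    "stream": True,
}

print(json.dumps(request_body, indent=2))
```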
API Formats
BraceKit supports multiple API formats:
OpenAI Format
The most common format, used by:
- LM Studio
- Jan.ai
- LocalAI
- vLLM
- OpenRouter
- Azure OpenAI
Base URL: http://localhost:1234/v1
Endpoint: /chat/completions
Anthropic Format
For Anthropic-compatible endpoints:
Base URL: https://your-anthropic-proxy.com
Format: Anthropic
Gemini Format
For Gemini-compatible endpoints:
Base URL: https://your-gemini-proxy.com
Format: Gemini
Ollama Format
For Ollama native API:
Base URL: http://localhost:11434
Format: Ollama
Endpoint: /api/chat
Common Configurations
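The formats above send differently shaped request bodies. A rough side-by-side, based on each service's public API (model names are illustrative, and BraceKit's exact payloads are an assumption):

```python
# Minimal chat request bodies per format, based on each service's
# public API docs (sketches only; model names are illustrative).
msg = "Hello!"

openai_body = {
    "model": "llama-3.1-8b",
    "messages": [{"role": "user", "content": msg}],
}

anthropic_body = {
    "model": "your-claude-model",
    "max_tokens": 1024,  # required by the Anthropic Messages API
    "messages": [{"role": "user", "content": msg}],
}

gemini_body = {
    # For Gemini, the model goes in the URL, not the body
    "contents": [{"role": "user", "parts": [{"text": msg}]}],
}

ollama_body = {
    "model": "llama3.1",
    "messages": [{"role": "user", "content": msg}],
    "stream": False,  # Ollama streams by default
}
```

Whichever format you pick, the configurations below map a service onto one of these shapes.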
LM Studio
- Open LM Studio
- Start a local server (port 1234)
- Configure in BraceKit:
Name: LM Studio
Base URL: http://localhost:1234/v1
API Key: none
Format: OpenAI
Jan.ai
- Open Jan
- Enable local server in settings
- Configure in BraceKit:
Name: Jan
Base URL: http://localhost:1337/v1
API Key: none
Format: OpenAI
LocalAI
Name: LocalAI
Base URL: http://localhost:8080/v1
API Key: none
Format: OpenAI
vLLM
Name: vLLM
Base URL: http://localhost:8000/v1
API Key: none
Format: OpenAI
OpenRouter
Name: OpenRouter
Base URL: https://openrouter.ai/api/v1
API Key: your-openrouter-key
Format: OpenAI
Azure OpenAI
Name: Azure OpenAI
Base URL: https://your-resource.openai.azure.com/openai/deployments/your-deployment
API Key: your-azure-key
Format: OpenAI
Model Fetching
BraceKit attempts to fetch available models from the /models endpoint.
If Fetching Works
Models appear automatically in the dropdown.
If Fetching Fails
- Type the model name manually
- Check the API documentation for available models
- Verify the endpoint is correct
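For OpenAI-compatible servers, the /models response is a JSON list, and extracting the model ids is a one-liner. A sketch (the sample response below is illustrative, not from a live server):

```python
import json

# Typical OpenAI-compatible /models response (sample data)
sample = '{"object": "list", "data": [{"id": "llama-3.1-8b"}, {"id": "qwen2.5-7b"}]}'

def model_ids(body: str) -> list:
    """Extract model ids from an OpenAI-style /models response."""
    return [m["id"] for m in json.loads(body).get("data", [])]

print(model_ids(sample))  # ['llama-3.1-8b', 'qwen2.5-7b']
```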
Troubleshooting
“Connection refused”
- Ensure the server is running
- Check the port is correct
- Verify no firewall is blocking the connection
“401 Unauthorized”
- Add the correct API key
- Some services accept any non-empty string as the key
“Model not found”
- Type the model name manually
- Check the service's model list
- Verify model name spelling
“CORS error”
- The server may need CORS headers
- Configure the server to allow browser requests
- Or use a browser extension to bypass (development only)
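If you control the server, the fix is to send standard CORS response headers so a browser-based client can reach it. Roughly (header values are illustrative; the exact configuration depends on your server):

```python
# Standard CORS response headers an API server must send for browser
# requests; values here are illustrative, and how you set them depends
# on the server you run.
cors_headers = {
    "Access-Control-Allow-Origin": "*",  # or the app's specific origin
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type, Authorization",
}

for name, value in cors_headers.items():
    print(f"{name}: {value}")
```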
Streaming not working
- Ensure the endpoint supports SSE (server-sent events)
- Check the response format matches the selected format
- Some local servers don't support streaming
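When streaming does work, an OpenAI-format server emits SSE "data:" lines that the client accumulates. A sketch of that accumulation (the sample lines below are illustrative, not from a live server):

```python
import json

# Sample SSE lines as an OpenAI-compatible server would stream them
# (illustrative data, not captured from a real server)
stream = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]

def collect_text(lines):
    """Accumulate streamed deltas from OpenAI-style SSE 'data:' lines."""
    out = []
    for line in lines:
        payload = line.removeprefix("data: ").strip()
        if payload == "[DONE]":  # end-of-stream sentinel
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        out.append(delta.get("content", ""))
    return "".join(out)

print(collect_text(stream))  # Hello
```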
Security Notes
Local Servers
For local development servers:
- No API key needed (any placeholder works)
- Servers bound to localhost are only reachable from your machine
- Safe to use without authentication as long as they stay local
Remote Servers
For remote or public servers:
- Always use a real API key
- Ensure HTTPS is enabled
- Consider rate limiting
Related
- Ollama – Recommended for local models
- Configuration – All settings