AI Providers
Configure DeepSeek models in BraceKit.
DeepSeek
DeepSeek offers powerful models at competitive prices, with V3.2 powering both chat and reasoning modes.
Setup
1. Get an API Key
- Go to platform.deepseek.com
- Sign in or create an account
- Navigate to API Keys
- Create a new key
2. Configure in BraceKit
- Open Settings → AI Provider
- Click DeepSeek in the provider grid
- Paste your API key
- Select a model
Settings are saved automatically as you type.
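Once the key is saved, you can sanity-check it outside BraceKit. DeepSeek's API is OpenAI-compatible, so a plain HTTPS request works; this is a minimal sketch (the helper name is ours, and the endpoint path follows DeepSeek's public API docs). Send `body` with any HTTP client to verify the key.

```python
import json

# DeepSeek's OpenAI-compatible chat endpoint, per its public API docs.
API_URL = "https://api.deepseek.com/chat/completions"

def build_chat_request(api_key, model, messages):
    """Build the headers and JSON body for a DeepSeek chat completion call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages})
    return headers, body

# A one-message "ping" is enough to confirm the key is accepted:
headers, body = build_chat_request(
    "sk-...",  # the key you created at platform.deepseek.com
    "deepseek-chat",
    [{"role": "user", "content": "ping"}],
)
```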
Available Models
Both models are powered by DeepSeek-V3.2 with different modes:
| Model | Mode | Best For | Context | Max Output |
|---|---|---|---|---|
| deepseek-chat | Non-thinking | General chat, code, summarization | 128K | 8K tokens |
| deepseek-reasoner | Thinking | Math, logic, complex analysis | 128K | 64K tokens |
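If you route requests programmatically, the table above reduces to a small lookup. A sketch (the routing helper and its name are ours; the limits are the ones listed here, so verify them against DeepSeek's current docs):

```python
# Token limits from the table above (verify against DeepSeek's current docs).
MODELS = {
    "deepseek-chat":     {"mode": "non-thinking", "context": 128_000, "max_output": 8_000},
    "deepseek-reasoner": {"mode": "thinking",     "context": 128_000, "max_output": 64_000},
}

def pick_model(needs_reasoning):
    """Route math/logic-heavy prompts to the reasoner, everything else to chat."""
    return "deepseek-reasoner" if needs_reasoning else "deepseek-chat"
```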
Features
Reasoning (Thinking Mode)
The deepseek-reasoner model shows its Chain-of-Thought reasoning process:
┌──────────────────────────────────────┐
│ 🧠 Thinking... ▾                     │
├──────────────────────────────────────┤
│ Let me analyze this problem...       │
│                                      │
│ Step 1: Identify the key variables   │
│ Step 2: Consider edge cases          │
│ Step 3: Formulate solution           │
└──────────────────────────────────────┘
Based on my analysis...
This happens automatically with the reasoner model.
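In the raw API response, the reasoner keeps the trace separate from the answer. A sketch of pulling the two apart, assuming DeepSeek's documented `reasoning_content` field on the assistant message (the helper name is ours):

```python
def split_reasoner_reply(message):
    """Separate the Chain-of-Thought trace from the final answer.

    deepseek-reasoner returns its thinking in a reasoning_content field
    alongside the usual content field; deepseek-chat omits it.
    """
    return message.get("reasoning_content") or "", message["content"]

# Truncated shape of a reasoner response message:
sample = {
    "role": "assistant",
    "reasoning_content": "Let me analyze this problem...",
    "content": "Based on my analysis...",
}
thinking, answer = split_reasoner_reply(sample)
```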
Function Calling
DeepSeek supports tool calling for:
- MCP server tools
- Built-in tools (Google Search)
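Tools are declared in the OpenAI-compatible function schema that DeepSeek accepts. A sketch of one definition; the tool name and parameters here are illustrative, not BraceKit's actual wiring:

```python
# Illustrative tool definition in the OpenAI-compatible schema.
web_search_tool = {
    "type": "function",
    "function": {
        "name": "google_search",  # hypothetical name for a search tool
        "description": "Search the web and return the top results.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search terms"},
            },
            "required": ["query"],
        },
    },
}

# Included in the request body alongside model and messages:
request_extra = {"tools": [web_search_tool]}
```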
Cost-Effective
DeepSeek's per-token rates are a fraction of those of most comparable models, and context caching discounts repeated input further; see the Pricing section below.
Model Parameters
Configure in Settings → AI Provider under the Advanced section:
| Parameter | Range | Effect |
|---|---|---|
| Temperature | 0-2 | Higher = more creative |
| Max Tokens | 1-8K (chat) / 1-64K (reasoner) | Maximum response length |
Recommended Settings
| Use Case | Model | Temperature |
|---|---|---|
| Code generation | deepseek-chat | 0.3 |
| General chat | deepseek-chat | 0.7 |
| Complex reasoning | deepseek-reasoner | 0.5 |
| Math/Logic | deepseek-reasoner | 0.0 |
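The recommendations above can be expressed as presets if you drive the API directly (the preset names are ours; model and temperature values are from the table):

```python
# Recommended settings from the table above, keyed by use case.
PRESETS = {
    "code":      {"model": "deepseek-chat",     "temperature": 0.3},
    "chat":      {"model": "deepseek-chat",     "temperature": 0.7},
    "reasoning": {"model": "deepseek-reasoner", "temperature": 0.5},
    "math":      {"model": "deepseek-reasoner", "temperature": 0.0},
}

def settings_for(use_case):
    """Fall back to general chat settings for unknown use cases."""
    return PRESETS.get(use_case, PRESETS["chat"])
```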
Pricing
DeepSeek V3.2 offers unified pricing for both models with automatic context caching:
| Type | Price (per 1M tokens) |
|---|---|
| Input (Cache Hit) | $0.028 |
| Input (Cache Miss) | $0.28 |
| Output | $0.42 |
Cache Benefits:
- Automatic context caching (enabled by default)
- 90% discount on cached input tokens
- Shared prefix across requests triggers caching
Note: Check DeepSeek pricing for current rates.
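To see how caching affects your bill, the rates above can be plugged into a quick estimator. A sketch; the rates are the ones listed on this page and may change:

```python
# USD per 1M tokens, from the pricing table above.
CACHE_HIT_IN, CACHE_MISS_IN, OUTPUT = 0.028, 0.28, 0.42

def estimate_cost(input_tokens, output_tokens, cache_hit_ratio=0.0):
    """Estimate request cost in USD, splitting input tokens by cache hit ratio."""
    hit = input_tokens * cache_hit_ratio
    miss = input_tokens - hit
    total = hit * CACHE_HIT_IN + miss * CACHE_MISS_IN + output_tokens * OUTPUT
    return round(total / 1_000_000, 6)

# 100K input tokens at an 80% cache hit rate, plus 2K output:
# 80_000 * 0.028 + 20_000 * 0.28 + 2_000 * 0.42 = 2240 + 5600 + 840 = 8680
# => $0.00868
cost = estimate_cost(100_000, 2_000, cache_hit_ratio=0.8)
```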
Troubleshooting
"Rate limit exceeded"
- Wait a moment and retry
- Check your usage limits in the console
Reasoning not showing
- Ensure you're using deepseek-reasoner (not deepseek-chat)
- Thinking mode is optimized for complex queries (math, logic, code)
- Simple queries may not trigger extended Chain-of-Thought
Slow responses
- The reasoner model takes longer to "think"
- For faster responses, use deepseek-chat