DeepSeek

Configure DeepSeek models in BraceKit.


DeepSeek offers powerful models at competitive prices, with V3.2 powering both chat and reasoning modes.

Setup

1. Get an API Key

  1. Go to platform.deepseek.com
  2. Sign in or create an account
  3. Navigate to API Keys
  4. Create a new key

2. Configure in BraceKit

  1. Open Settings → AI Provider
  2. Click DeepSeek in the provider grid
  3. Paste your API key
  4. Select a model

Settings are saved automatically as you type.

Available Models

Both models are powered by DeepSeek-V3.2 with different modes:

| Model | Mode | Best For | Context | Max Output |
|---|---|---|---|---|
| deepseek-chat | Non-thinking | General chat, code, summarization | 128K | 8K tokens |
| deepseek-reasoner | Thinking | Math, logic, complex analysis | 128K | 64K tokens |
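BraceKit builds these requests for you, but for reference, a request to DeepSeek's OpenAI-compatible chat completions API would be shaped roughly like this (a sketch; field names follow the OpenAI-compatible format, and the prompts are placeholders):

```python
# Sketch of the request body sent for each mode. DeepSeek exposes an
# OpenAI-compatible chat completions endpoint, so the payload shape
# matches that format.

def build_payload(model: str, prompt: str, max_tokens: int) -> dict:
    """Assemble a chat-completions request body for the given model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "stream": True,  # stream tokens as they arrive
    }

# deepseek-chat caps output at 8K tokens; deepseek-reasoner allows up to 64K.
chat_req = build_payload("deepseek-chat", "Summarize this article...", 8192)
reasoner_req = build_payload("deepseek-reasoner", "Prove the claim...", 65536)
```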

Features

Reasoning (Thinking Mode)

The deepseek-reasoner model shows its Chain-of-Thought reasoning process:

┌─────────────────────────────────────┐
│ 🧠 Thinking...                    ▾ │
├─────────────────────────────────────┤
│ Let me analyze this problem...      │
│                                     │
│ Step 1: Identify the key variables  │
│ Step 2: Consider edge cases         │
│ Step 3: Formulate solution          │
└─────────────────────────────────────┘

Based on my analysis...

This happens automatically with the reasoner model.
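Under the hood, the reasoner's API response carries the chain-of-thought in a `reasoning_content` field separate from the final answer's `content` field. A minimal sketch of splitting the two (the simulated message below is illustrative):

```python
# Sketch: separating the chain-of-thought from the final answer in a
# deepseek-reasoner response message.

def split_response(message: dict) -> tuple[str, str]:
    """Return (reasoning, answer) from a reasoner message dict."""
    return message.get("reasoning_content", ""), message.get("content", "")

# Simulated response message for illustration:
msg = {
    "reasoning_content": "Step 1: Identify the key variables...",
    "content": "Based on my analysis...",
}
thinking, answer = split_response(msg)
```

A UI like the collapsible panel above only needs this split: render `reasoning_content` in the panel and `content` as the reply.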

Function Calling

DeepSeek supports tool calling for:

  • MCP server tools
  • Built-in tools (Google Search)
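Tools are declared in the OpenAI-compatible function-calling format. A sketch of one tool definition (the tool name and parameters here are hypothetical, not BraceKit's actual schema):

```python
# Sketch of a tool definition in the OpenAI-compatible format that
# DeepSeek accepts. Name and parameters are illustrative only.

web_search_tool = {
    "type": "function",
    "function": {
        "name": "google_search",  # hypothetical tool name
        "description": "Search the web and return top results.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search terms"},
            },
            "required": ["query"],
        },
    },
}
```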

Cost-Effective

DeepSeek's per-token rates are substantially lower than most comparable hosted models (see Pricing below), making it a strong default for high-volume use.

Model Parameters

Configure in Settings → AI Provider under the Advanced section:

| Parameter | Range | Effect |
|---|---|---|
| Temperature | 0-2 | Higher = more creative |
| Max Tokens | 1-8K (chat) / 1-64K (reasoner) | Maximum response length |

Recommended starting points:

| Use Case | Model | Temperature |
|---|---|---|
| Code generation | deepseek-chat | 0.3 |
| General chat | deepseek-chat | 0.7 |
| Complex reasoning | deepseek-reasoner | 0.5 |
| Math/Logic | deepseek-reasoner | 0.0 |
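The recommendations above can be expressed as a small lookup table, if you want to pick settings programmatically (the preset names are illustrative, not BraceKit settings keys):

```python
# The recommended (model, temperature) pairs from the table above,
# keyed by use case. Names are illustrative.

PRESETS = {
    "code":      ("deepseek-chat", 0.3),
    "chat":      ("deepseek-chat", 0.7),
    "reasoning": ("deepseek-reasoner", 0.5),
    "math":      ("deepseek-reasoner", 0.0),
}

def settings_for(use_case: str) -> tuple[str, float]:
    """Look up the recommended (model, temperature) pair."""
    return PRESETS[use_case]
```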

Pricing

DeepSeek V3.2 offers unified pricing for both models with automatic context caching:

| Type | Price (per 1M tokens) |
|---|---|
| Input (Cache Hit) | $0.028 |
| Input (Cache Miss) | $0.28 |
| Output | $0.42 |

Cache Benefits:

  • Automatic context caching (enabled by default)
  • 90% discount on cached input tokens
  • Shared prefix across requests triggers caching

Note: Check DeepSeek pricing for current rates.

Troubleshooting

“Rate limit exceeded”

  • Wait a moment and retry
  • Check your usage limits in the console
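Retrying can be automated with exponential backoff. A minimal sketch (the exception type is illustrative; real code would catch the provider SDK's rate-limit error):

```python
# Sketch of a retry loop with exponential backoff for rate-limit errors.
import time

class RateLimitError(Exception):
    """Stand-in for a provider SDK's rate-limit exception."""

def with_backoff(call, retries: int = 3, base_delay: float = 1.0):
    """Retry `call` on RateLimitError, doubling the delay each attempt."""
    for attempt in range(retries):
        try:
            return call()
        except RateLimitError:
            if attempt == retries - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))
```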

Reasoning not showing

  • Ensure you’re using deepseek-reasoner (not deepseek-chat)
  • The thinking mode is optimized for complex queries (math, logic, code)
  • Simple queries may not trigger extended Chain-of-Thought

Slow responses

  • The reasoner model takes longer to “think” before answering
  • For faster responses, use deepseek-chat

See Also

  • OpenAI — Alternative with reasoning models
  • Anthropic — Alternative with extended thinking
  • Ollama — Free local alternative