
DeepSeek: R1 Distill Qwen 7B

Tags: Qwen · Text · Paid

DeepSeek-R1-Distill-Qwen-7B is a 7-billion-parameter dense language model distilled from DeepSeek-R1, leveraging reinforcement-learning-enhanced reasoning data generated by DeepSeek's larger models. The distillation process transfers advanced reasoning, math, and code capabilities into a smaller, more efficient architecture based on Qwen2.5-Math-7B. The model demonstrates strong performance across mathematical benchmarks (92.8% pass@1 on MATH-500), coding tasks (Codeforces rating 1189), and general reasoning (49.1% pass@1 on GPQA Diamond), achieving competitive accuracy relative to larger models at substantially lower inference cost.

Parameters: 7B
Context Window: 131,072 tokens
Input Price: $0.10 per 1M tokens
Output Price: $0.20 per 1M tokens

Capabilities

Model capabilities and supported modalities

Performance

Reasoning: Excellent reasoning capabilities with strong logical analysis
Math: Strong mathematical capabilities; handles complex calculations well
Coding: Specialized in code generation with excellent programming capabilities
Knowledge: -

Modalities

Input Modalities: text
Output Modalities: text

LLM Price Calculator

Calculate the cost of using this model

Input Cost: $0.000150
Output Cost: $0.000600
Total Cost: $0.000750
Estimated usage: 4,500 tokens
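The calculator's arithmetic can be reproduced with a short sketch. The figures above are consistent with 1,500 input tokens and 3,000 output tokens (together the 4,500-token estimated usage); that split is inferred here, not stated on the page, and the function name is illustrative.

```python
# Hypothetical per-request cost calculator using this model's listed rates.
INPUT_PRICE_PER_M = 0.10   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.20  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> tuple[float, float, float]:
    """Return (input_cost, output_cost, total_cost) in USD for one request."""
    input_cost = input_tokens / 1_000_000 * INPUT_PRICE_PER_M
    output_cost = output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M
    return input_cost, output_cost, input_cost + output_cost

# Reproduce the example above: assumed 1,500 input + 3,000 output tokens.
i, o, total = request_cost(1_500, 3_000)
print(f"Input: ${i:.6f}  Output: ${o:.6f}  Total: ${total:.6f}")
# → Input: $0.000150  Output: $0.000600  Total: $0.000750
```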

Monthly Cost Estimator

Based on different usage levels

Light Usage: $0.0030 (~10 requests)
Moderate Usage: $0.0300 (~100 requests)
Heavy Usage: $0.3000 (~1,000 requests)
Enterprise: $3.0000 (~10,000 requests)
Note: Estimates based on current token count settings per request.