Mistral: Mixtral 8x7B Instruct
Mistral
Text
Paid
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts model by Mistral AI, fine-tuned by Mistral for chat and instruction-following use. Each layer incorporates 8 experts (feed-forward networks), for a total of roughly 47 billion parameters, only a fraction of which are active per token. #moe
Parameters
~47B
Context Window
32,768
tokens
Input Price
$0.08
per 1M tokens
Output Price
$0.24
per 1M tokens
Capabilities
Model capabilities and supported modalities
Performance
Reasoning
Good reasoning with solid logical foundations
Math
-
Coding
Capable of generating functional code with good practices
Knowledge
-
Modalities
Input Modalities
text
Output Modalities
text
LLM Price Calculator
Calculate the cost of using this model
Input Cost: $0.000120
Output Cost: $0.000720
Total Cost: $0.000840
Estimated usage: 4,500 tokens
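The calculator figures above follow directly from the per-token prices: at $0.08 per 1M input tokens and $0.24 per 1M output tokens, the displayed costs imply a split of 1,500 input and 3,000 output tokens (inferred from the costs; the split itself is not stated on the page). A minimal sketch of that arithmetic:

```python
# Per-request cost arithmetic for Mixtral 8x7B Instruct pricing.
# Prices are USD per 1M tokens, as listed above.
INPUT_PRICE_PER_M = 0.08   # input price per 1M tokens
OUTPUT_PRICE_PER_M = 0.24  # output price per 1M tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request given token counts."""
    input_cost = input_tokens / 1_000_000 * INPUT_PRICE_PER_M
    output_cost = output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M
    return input_cost + output_cost

# 1,500 input + 3,000 output = 4,500 tokens, matching the example usage.
print(f"${request_cost(1_500, 3_000):.6f}")  # prints $0.000840
```

The same function scales linearly, so multiplying by request count reproduces the monthly tiers below for whatever token settings they assume.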
Monthly Cost Estimator
Based on different usage levels
Light Usage
$0.0032
~10 requests
Moderate Usage
$0.0320
~100 requests
Heavy Usage
$0.3200
~1000 requests
Enterprise
$3.2000
~10,000 requests
Note: Estimates are based on the current token count settings per request.
Last Updated: 2025/05/06