Auto Router
Your prompt will be processed by a meta-model and routed to one of the models below, optimizing for the best possible output. To see which model was used, visit [Activity](/activity), or read the `model` attribute of the response. Your response will be priced at the same rate as the routed model. The meta-model is powered by [Not Diamond](https://docs.notdiamond.ai/docs/how-not-diamond-works). Learn more in our [docs](/docs/model-routing).

Requests will be routed to the following models:

- [openai/gpt-5](/openai/gpt-5)
- [openai/gpt-5-mini](/openai/gpt-5-mini)
- [openai/gpt-5-nano](/openai/gpt-5-nano)
- [openai/gpt-4.1-nano](/openai/gpt-4.1-nano)
- [openai/gpt-4.1](/openai/gpt-4.1)
- [openai/gpt-4.1-mini](/openai/gpt-4.1-mini)
- [openai/gpt-4o-mini](/openai/gpt-4o-mini)
- [openai/chatgpt-4o-latest](/openai/chatgpt-4o-latest)
- [anthropic/claude-3.5-haiku](/anthropic/claude-3.5-haiku)
- [anthropic/claude-opus-4-1](/anthropic/claude-opus-4-1)
- [anthropic/claude-sonnet-4-0](/anthropic/claude-sonnet-4-0)
- [anthropic/claude-3-7-sonnet-latest](/anthropic/claude-3-7-sonnet-latest)
- [google/gemini-2.5-pro](/google/gemini-2.5-pro)
- [google/gemini-2.5-flash](/google/gemini-2.5-flash)
- [mistral/mistral-large-latest](/mistral/mistral-large-latest)
- [mistral/mistral-medium-latest](/mistral/mistral-medium-latest)
- [mistral/mistral-small-latest](/mistral/mistral-small-latest)
- [mistralai/mistral-nemo](/mistralai/mistral-nemo)
- [x-ai/grok-3](/x-ai/grok-3)
- [x-ai/grok-3-mini](/x-ai/grok-3-mini)
- [x-ai/grok-4](/x-ai/grok-4)
- [deepseek/deepseek-r1](/deepseek/deepseek-r1)
- [meta-llama/llama-3.1-70b-instruct](/meta-llama/llama-3.1-70b-instruct)
- [meta-llama/llama-3.1-405b-instruct](/meta-llama/llama-3.1-405b-instruct)
- [mistralai/mixtral-8x22b-instruct](/mistralai/mixtral-8x22b-instruct)
- [perplexity/sonar](/perplexity/sonar)
- [cohere/command-r-plus](/cohere/command-r-plus)
- [cohere/command-r](/cohere/command-r)
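As a minimal sketch of the flow described above: you send a chat request addressed to the router, then read the `model` attribute of the response to see which model actually handled it. The `openrouter/auto` slug and the endpoint URL here are assumptions based on OpenRouter's public API conventions; verify them against the current API reference before use.

```python
import json

# Assumed endpoint for OpenAI-compatible chat completions; verify in the docs.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_auto_router_request(prompt: str) -> dict:
    """Build a chat request addressed to the Auto Router meta-model."""
    return {
        "model": "openrouter/auto",  # assumed Auto Router slug
        "messages": [{"role": "user", "content": prompt}],
    }


def routed_model(response: dict) -> str:
    """Read which underlying model the router actually used.

    The routed model's ID is returned in the top-level `model` attribute,
    and the request is billed at that model's rate.
    """
    return response["model"]


# Illustrative response shape only -- not a real API reply.
sample_response = {
    "model": "anthropic/claude-sonnet-4-0",
    "choices": [{"message": {"role": "assistant", "content": "..."}}],
}

payload = build_auto_router_request("Summarize this document.")
print(json.dumps(payload, indent=2))
print(routed_model(sample_response))
```

In practice you would POST `payload` to the endpoint with your API key in an `Authorization: Bearer` header; the parsing step is the same either way.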
Parameters

- Context Window: 2,000,000 tokens
- Input Price: varies (billed at the routed model's rate)
- Output Price: varies (billed at the routed model's rate)
- Modalities: text in, text out