
Auto Router

Router · Text · Free

Your prompt will be processed by a meta-model and routed to one of dozens of models (see below), optimizing for the best possible output. To see which model was used, visit [Activity](/activity), or read the `model` attribute of the response. Your response will be priced at the same rate as the routed model. The meta-model is powered by [Not Diamond](https://docs.notdiamond.ai/docs/how-not-diamond-works). Learn more in our [docs](/docs/model-routing).

Requests will be routed to the following models:

- [openai/gpt-5](/openai/gpt-5)
- [openai/gpt-5-mini](/openai/gpt-5-mini)
- [openai/gpt-5-nano](/openai/gpt-5-nano)
- [openai/gpt-4.1-nano](/openai/gpt-4.1-nano)
- [openai/gpt-4.1](/openai/gpt-4.1)
- [openai/gpt-4.1-mini](/openai/gpt-4.1-mini)
- [openai/gpt-4o-mini](/openai/gpt-4o-mini)
- [openai/chatgpt-4o-latest](/openai/chatgpt-4o-latest)
- [anthropic/claude-3.5-haiku](/anthropic/claude-3.5-haiku)
- [anthropic/claude-opus-4-1](/anthropic/claude-opus-4-1)
- [anthropic/claude-sonnet-4-0](/anthropic/claude-sonnet-4-0)
- [anthropic/claude-3-7-sonnet-latest](/anthropic/claude-3-7-sonnet-latest)
- [google/gemini-2.5-pro](/google/gemini-2.5-pro)
- [google/gemini-2.5-flash](/google/gemini-2.5-flash)
- [mistral/mistral-large-latest](/mistral/mistral-large-latest)
- [mistral/mistral-medium-latest](/mistral/mistral-medium-latest)
- [mistral/mistral-small-latest](/mistral/mistral-small-latest)
- [mistralai/mistral-nemo](/mistralai/mistral-nemo)
- [x-ai/grok-3](/x-ai/grok-3)
- [x-ai/grok-3-mini](/x-ai/grok-3-mini)
- [x-ai/grok-4](/x-ai/grok-4)
- [deepseek/deepseek-r1](/deepseek/deepseek-r1)
- [meta-llama/llama-3.1-70b-instruct](/meta-llama/llama-3.1-70b-instruct)
- [meta-llama/llama-3.1-405b-instruct](/meta-llama/llama-3.1-405b-instruct)
- [mistralai/mixtral-8x22b-instruct](/mistralai/mixtral-8x22b-instruct)
- [perplexity/sonar](/perplexity/sonar)
- [cohere/command-r-plus](/cohere/command-r-plus)
- [cohere/command-r](/cohere/command-r)
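Reading the `model` attribute of the response can be sketched as below, assuming the standard OpenAI-compatible chat-completion response shape (the sample payload and its field values are hypothetical, for illustration only):

```python
import json

def routed_model(response_body: str) -> str:
    """Return the model the Auto Router selected for a request.

    Assumes an OpenAI-compatible chat-completion response, where the
    top-level `model` field names the routed model.
    """
    response = json.loads(response_body)
    return response["model"]

# Hypothetical response body for illustration only.
sample = json.dumps({
    "id": "gen-123",
    "model": "anthropic/claude-sonnet-4-0",
    "choices": [{"message": {"role": "assistant", "content": "Hello!"}}],
    "usage": {"prompt_tokens": 12, "completion_tokens": 3},
})
print(routed_model(sample))  # -> anthropic/claude-sonnet-4-0
```

Since your response is billed at the routed model's rate, this field is also what you would key on when reconciling costs per request.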

Parameters

Context Window: 2,000,000 tokens
Input Price: varies (priced at the same rate as the routed model)
Output Price: varies (priced at the same rate as the routed model)

Capabilities

Model capabilities and supported modalities

Performance

Benchmark scores (Reasoning, Math, Coding, Knowledge) are not reported for the router itself; performance depends on the model each request is routed to.

Modalities

Input Modalities: text
Output Modalities: text

LLM Price Calculator

Calculate the cost of using this model. Per-request cost is (input tokens ÷ 1,000,000) × input price plus (output tokens ÷ 1,000,000) × output price, at the routed model's rates. Default estimated usage: 4,500 tokens per request (1,500 input, 3,000 output).
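The per-request arithmetic can be sketched as follows; the rates below are illustrative assumptions, since the Auto Router bills at whichever model actually handled the request:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_1m: float, output_price_per_1m: float) -> float:
    """Cost of one request: tokens are billed per million at the routed model's rates."""
    input_cost = input_tokens / 1_000_000 * input_price_per_1m
    output_cost = output_tokens / 1_000_000 * output_price_per_1m
    return input_cost + output_cost

# Illustrative rates only -- actual rates are those of the routed model.
cost = request_cost(input_tokens=1_500, output_tokens=3_000,
                    input_price_per_1m=3.00, output_price_per_1m=15.00)
print(f"${cost:.6f}")  # -> $0.049500
```

The 1,500/3,000 token split mirrors the calculator's default 4,500-token request.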

Monthly Cost Estimator

Based on different usage levels:

- Light Usage: ~10 requests
- Moderate Usage: ~100 requests
- Heavy Usage: ~1,000 requests
- Enterprise: ~10,000 requests

Note: Estimates are based on the current token count settings per request; actual cost depends on the routed model's rates.
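The tier estimates are the per-request cost scaled by request volume; a minimal sketch, assuming a hypothetical average per-request cost:

```python
def monthly_cost(requests: int, cost_per_request: float) -> float:
    """Monthly spend at a given request volume and average per-request cost."""
    return requests * cost_per_request

# Hypothetical average per-request cost; the real figure depends on the routed model.
per_request = 0.05
for tier, n in [("Light", 10), ("Moderate", 100), ("Heavy", 1_000), ("Enterprise", 10_000)]:
    print(f"{tier}: ${monthly_cost(n, per_request):,.2f}")
```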