
Perplexity: R1 1776

DeepSeek · Text · Paid

R1 1776 is a version of DeepSeek-R1 that has been post-trained to remove censorship constraints on topics restricted by the Chinese government. The model retains its original reasoning capabilities while responding directly to a wider range of queries. R1 1776 is an offline chat model and does not use the Perplexity search subsystem.

The model was evaluated on a multilingual dataset of over 1,000 examples covering sensitive topics to measure its likelihood of refusing or over-filtering responses ([Evaluation Results](https://cdn-uploads.huggingface.co/production/uploads/675c8332d01f593dc90817f5/GiN2VqC5hawUgAGJ6oHla.png)). Its performance on math and reasoning benchmarks remains similar to the base R1 model ([Reasoning Performance](https://cdn-uploads.huggingface.co/production/uploads/675c8332d01f593dc90817f5/n4Z9Byqp2S7sKUvCvI40R.png)). Read more in the [Blog Post](https://perplexity.ai/hub/blog/open-sourcing-r1-1776).
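
For illustration, here is a minimal sketch of querying the model through an OpenAI-compatible chat completions client. The base URL `https://api.perplexity.ai` and the model id `r1-1776` are assumptions; verify both against Perplexity's current API documentation.

```python
# Minimal sketch: calling R1 1776 through an OpenAI-compatible client.
# Assumptions (verify against Perplexity's API docs): the endpoint
# https://api.perplexity.ai and the model id "r1-1776".
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",      # placeholder credential
    base_url="https://api.perplexity.ai",   # assumed endpoint
)

response = client.chat.completions.create(
    model="r1-1776",  # assumed model id
    messages=[
        {"role": "system", "content": "Be precise and concise."},
        {"role": "user", "content": "What happened at Tiananmen Square in 1989?"},
    ],
    max_tokens=1024,
)

print(response.choices[0].message.content)
```

Since the weights were open-sourced, the same prompt pattern should also work against a locally hosted copy served behind any OpenAI-compatible server.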

| Specification | Value |
| --- | --- |
| Parameters | - |
| Context Window | 128,000 tokens |
| Input Price | $2 per 1M tokens |
| Output Price | $8 per 1M tokens |

Capabilities

Model capabilities and supported modalities

Performance

- Reasoning: Excellent reasoning capabilities with strong logical analysis
- Math: Strong mathematical capabilities, handles complex calculations well
- Coding: -
- Knowledge: -

Modalities

- Input Modalities: text
- Output Modalities: text

LLM Price Calculator

Calculate the cost of using this model

- Input Cost: $0.003000
- Output Cost: $0.024000
- Total Cost: $0.027000
- Estimated usage: 4,500 tokens (1,500 input + 3,000 output, the split implied by the listed costs)
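
The arithmetic behind these figures is a straight per-token multiplication. The sketch below reproduces them, assuming the 4,500-token example breaks down as 1,500 input and 3,000 output tokens, which is the split implied by the listed input and output costs.

```python
# Per-request cost arithmetic for R1 1776 at $2 / $8 per 1M tokens.
# Assumed split: 1,500 input + 3,000 output tokens, which reproduces
# the Input/Output/Total figures listed above.
INPUT_PRICE_PER_M_TOKENS = 2.00    # USD per 1M input tokens
OUTPUT_PRICE_PER_M_TOKENS = 8.00   # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single request."""
    input_cost = input_tokens / 1_000_000 * INPUT_PRICE_PER_M_TOKENS
    output_cost = output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M_TOKENS
    return input_cost + output_cost

print(f"${request_cost(1_500, 3_000):.6f}")  # $0.027000
```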

Monthly Cost Estimator

Based on different usage levels

| Usage Level | Monthly Requests | Estimated Cost |
| --- | --- | --- |
| Light Usage | ~10 | $0.1000 |
| Moderate Usage | ~100 | $1.0000 |
| Heavy Usage | ~1,000 | $10.0000 |
| Enterprise | ~10,000 | $100.0000 |

Note: Estimates based on current token count settings per request.
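
As a sketch, the tier totals scale linearly with request count and imply roughly $0.01 per request (e.g. $0.1000 / 10 requests), i.e. a smaller default per-request token count than the calculator example above; the exact default settings are not shown on this page.

```python
# Monthly-tier arithmetic implied by the table above: the listed totals
# correspond to about $0.01 per request under the widget's default
# (unshown) token settings.
COST_PER_REQUEST = 0.01  # USD, implied by the tier figures

tiers = {
    "Light Usage": 10,
    "Moderate Usage": 100,
    "Heavy Usage": 1_000,
    "Enterprise": 10_000,
}

for tier, monthly_requests in tiers.items():
    monthly_cost = monthly_requests * COST_PER_REQUEST
    print(f"{tier}: ~{monthly_requests:,} requests -> ${monthly_cost:.4f}")
```
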
Last Updated: 2025/05/06