Meta: LlamaGuard 2 8B
This safeguard model has 8B parameters and is based on the Llama 3 family. Like its predecessor, [LlamaGuard 1](https://huggingface.co/meta-llama/LlamaGuard-7b), it can classify both prompts and responses. LlamaGuard 2 behaves like a normal LLM, generating text that indicates whether the given input/output is safe or unsafe; if deemed unsafe, it also lists the content categories violated. For best results, use raw prompt input or the `/completions` endpoint rather than the chat API. The model has demonstrated strong performance against leading closed-source models in human evaluations. To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).
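Because the verdict arrives as generated text, the caller must parse the completion itself. A minimal sketch of such a parser, assuming the commonly described output shape (first line `safe` or `unsafe`, followed by a line of comma-separated category codes when unsafe) — the exact format should be confirmed against the official model card, and the function name here is illustrative:

```python
def parse_guard_output(text: str) -> tuple[str, list[str]]:
    """Parse a LlamaGuard-style completion into (verdict, categories).

    Assumed format: first non-empty line is 'safe' or 'unsafe';
    when unsafe, the next line holds comma-separated category codes
    (e.g. 'S1,S8'). Verify against the model card before relying on this.
    """
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    if not lines:
        return "unknown", []
    verdict = lines[0].lower()
    categories: list[str] = []
    if verdict == "unsafe" and len(lines) > 1:
        categories = [c.strip() for c in lines[1].split(",") if c.strip()]
    return verdict, categories
```

For example, a completion of `"unsafe\nS1,S8"` would yield `("unsafe", ["S1", "S8"])`, while a plain `"safe"` yields `("safe", [])`.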
Parameters: 8B
Context Window: 8,192 tokens
Input Price: $0.20 per 1M tokens
Output Price: $0.20 per 1M tokens
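The per-token pricing above translates to a simple cost formula. A minimal sketch, using the listed rates of $0.20 per 1M tokens for both input and output (the function name is illustrative, not part of any official SDK):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price: float = 0.20, output_price: float = 0.20) -> float:
    """Estimate USD cost for a given token volume.

    Prices are USD per 1M tokens, matching the rates listed above.
    """
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000
```

For instance, classifying 2M input tokens and generating 0.5M output tokens in a month would cost about $0.50.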
Capabilities
Model capabilities and supported modalities
Performance
- Good reasoning with solid logical foundations
- Capable of generating functional code with good practices
Modalities
- Input: text
- Output: text