|
--- |
|
base_model: mattshumer/ref_70_e3 |
|
language: |
|
- en |
|
- de |
|
- fr |
|
- it |
|
- pt |
|
- hi |
|
- es |
|
- th |
|
library_name: transformers |
|
tags: |
|
- facebook |
|
- meta |
|
- pytorch |
|
- llama |
|
- llama-3 |
|
model-index: |
|
- name: fsaudm/Reflection-Llama-3.1-70B-Instruct-NF4 |
|
results: [] |
|
--- |
|
|
|
# Reflection-Llama-3.1-70B-Instruct-NF4
|
|
|
|
|
|
This is a **4-bit (NF4)** quantized version of `Reflection Llama 3.1 70B Instruct`, produced with `bitsandbytes` and `accelerate`.
|
|
|
- **Developed by:** Farid Saud @ DSRS |
|
- **Base Model:** meta-llama/Meta-Llama-3.1-70B-Instruct |
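
For reference, an NF4 quantization like this one can be produced with a `BitsAndBytesConfig`. The sketch below is illustrative only; the exact configuration values and source checkpoint used for this model are assumptions, not a confirmed recipe.

```python
# Illustrative sketch: 4-bit NF4 quantization with bitsandbytes.
# The config values and source repo are assumptions, not the confirmed recipe.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4-bit on load
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for compute
)

model = AutoModelForCausalLM.from_pretrained(
    "mattshumer/ref_70_e3",                 # base model listed in this card's metadata
    quantization_config=bnb_config,
    device_map="auto",                      # requires accelerate
)
```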
|
|
|
>[!WARNING]
>There is (currently) significant controversy over this model's legitimacy; use it with caution.
|
|
|
## Use this model |
|
|
|
|
|
Use a pipeline as a high-level helper: |
|
```python |
|
# Use a pipeline as a high-level helper |
|
from transformers import pipeline |
|
|
|
messages = [ |
|
{"role": "user", "content": "Who are you?"}, |
|
] |
|
pipe = pipeline("text-generation", model="fsaudm/Reflection-Llama-3.1-70B-Instruct-NF4") |
|
pipe(messages) |
|
``` |
|
|
|
|
|
|
|
Load the model directly:
|
```python |
|
# Load model directly |
|
from transformers import AutoTokenizer, AutoModelForCausalLM |
|
|
|
tokenizer = AutoTokenizer.from_pretrained("fsaudm/Reflection-Llama-3.1-70B-Instruct-NF4")
model = AutoModelForCausalLM.from_pretrained(
    "fsaudm/Reflection-Llama-3.1-70B-Instruct-NF4",
    device_map="auto",  # dispatch across available devices (requires accelerate)
)
|
``` |
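
Once loaded, generation follows the usual `transformers` pattern. The snippet below is a minimal sketch assuming the tokenizer and model loaded above; `max_new_tokens=256` is an illustrative choice.

```python
# Minimal generation sketch (assumes the tokenizer/model loaded above).
messages = [
    {"role": "user", "content": "Who are you?"},
]

# Build the Llama 3.1 chat-formatted prompt and tokenize it.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```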
|
|
|
## System Prompt |
|
|
|
The system prompt used for training this model is:

```
You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags.
```

We recommend using this exact system prompt to get the best results from Reflection 70B. You may also want to experiment with combining this system prompt with your own custom instructions to customize the model's behavior.
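
A minimal sketch of passing this system prompt through the `transformers` chat interface (everything else is unchanged from the examples above):

```python
# Sketch: passing the recommended system prompt as a "system" message.
from transformers import pipeline

SYSTEM_PROMPT = (
    "You are a world-class AI system, capable of complex reasoning and "
    "reflection. Reason through the query inside <thinking> tags, and then "
    "provide your final response inside <output> tags. If you detect that you "
    "made a mistake in your reasoning at any point, correct yourself inside "
    "<reflection> tags."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "what is 2+2?"},
]

pipe = pipeline("text-generation", model="fsaudm/Reflection-Llama-3.1-70B-Instruct-NF4")
print(pipe(messages))
```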
|
|
|
## Chat Format |
|
|
|
The model uses the standard Llama 3.1 chat format. Here’s an example:
|
|
|
``` |
|
<|begin_of_text|><|start_header_id|>system<|end_header_id|> |
|
|
|
You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags.<|eot_id|><|start_header_id|>user<|end_header_id|> |
|
|
|
what is 2+2?<|eot_id|><|start_header_id|>assistant<|end_header_id|> |
|
``` |
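
You normally do not need to build this string by hand: the tokenizer's chat template renders it for you. A small sketch (the system prompt is elided here for brevity):

```python
# The chat template renders the Llama 3.1 format shown above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("fsaudm/Reflection-Llama-3.1-70B-Instruct-NF4")
messages = [
    {"role": "system", "content": "You are a world-class AI system..."},  # full prompt elided
    {"role": "user", "content": "what is 2+2?"},
]
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```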
|
|
|
## Tips for Performance |
|
|
|
- We initially recommend a `temperature` of `0.7` and a `top_p` of `0.95` (see the sketch below).

- For increased accuracy, append `Think carefully.` to the end of your messages.
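
As referenced above, a sketch applying these settings with the pipeline; the prompt and `max_new_tokens` value are illustrative assumptions:

```python
# Sketch: recommended sampling settings plus the "Think carefully." suffix.
from transformers import pipeline

pipe = pipeline("text-generation", model="fsaudm/Reflection-Llama-3.1-70B-Instruct-NF4")
messages = [
    {"role": "user", "content": "What is the capital of France? Think carefully."},
]
out = pipe(messages, do_sample=True, temperature=0.7, top_p=0.95, max_new_tokens=512)
print(out)
```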
|
|
|
|
|
|
|
|