---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
inference: true
widget:
- messages:
  - role: user
    content: What is your favorite condiment?
---

# Mistral-7B-Instruct-v0.1 for Flax

## Quickstart

```python
import jax.numpy as jnp
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import FlaxMistralForCausalLM

# load the Flax model and its tokenizer
model = FlaxMistralForCausalLM.from_pretrained("rdyro/Mistral-7B-Instruct-v0.1", dtype=jnp.float32)
tokenizer = AutoTokenizer.from_pretrained("rdyro/Mistral-7B-Instruct-v0.1")

# load the reference PyTorch model and tokenizer for comparison
torch_model_id = "mistralai/Mistral-7B-Instruct-v0.1"
torch_model = AutoModelForCausalLM.from_pretrained(
    torch_model_id, device_map="cpu", torch_dtype=torch.float32
)
torch_tokenizer = AutoTokenizer.from_pretrained(torch_model_id)
```

We can compare the outputs to the original PyTorch version.

```python
messages = [{"role": "user", "content": "what's your name?"}]
input_jax = tokenizer.apply_chat_template(messages, return_tensors="jax")
input_pt = torch_tokenizer.apply_chat_template(messages, return_tensors="pt")

# run both models on the same prompt
out_jax = model(input_jax)
with torch.no_grad():
    out_pt = torch_model(input_pt)

# relative error between the PyTorch and Flax logits
err = jnp.linalg.norm(jnp.array(out_pt.logits) - out_jax.logits) / jnp.linalg.norm(
    jnp.array(out_pt.logits)
)
print(f"Error is at numerical precision level: {err:.4e}")
```
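
The Flax model also works with the Flax generation utilities in `transformers`. A minimal greedy-decoding sketch, reusing `model`, `tokenizer`, and `input_jax` from above (the `max_new_tokens` value is only illustrative):

```python
# greedy decoding with the Flax model; `out.sequences` holds the generated token ids
out = model.generate(input_jax, max_new_tokens=64, do_sample=False)
print(tokenizer.batch_decode(out.sequences, skip_special_tokens=True)[0])
```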

<p align="center">
Below is the model card of the original PyTorch version.
</p>

---

# Model Card for Mistral-7B-Instruct-v0.1

The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model, trained on a variety of publicly available conversation datasets.

For full details of this model, please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).

## Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.

E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
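
If you build the prompt string manually like this, note that it already contains the `<s>` begin-of-sentence marker, so the tokenizer should not be allowed to add a second one. A small sketch of encoding such a string (the variable names are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
text = "<s>[INST] What is your favourite condiment? [/INST]"
# the string already starts with <s>, so disable the tokenizer's automatic special tokens
encoded = tokenizer(text, return_tensors="pt", add_special_tokens=False)
```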

This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
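
To inspect the exact prompt string the template produces, `apply_chat_template` can also return plain text instead of token ids (a small sketch, reusing `messages` from above):

```python
# render the conversation to a string with the [INST] ... [/INST] formatting applied
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
```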

## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices (the first two can be read from the model config, as sketched after this list):
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
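
The grouped-query and sliding-window settings above can be read directly from the published model config; a small verification sketch (the commented values are what the released config contains):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
# grouped-query attention: 32 query heads share 8 key/value heads
print(cfg.num_attention_heads, cfg.num_key_value_heads)
# sliding-window attention: each token attends to at most this many preceding tokens
print(cfg.sliding_window)
```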

## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
```

Installing transformers from source should solve the issue:
```
pip install git+https://github.com/huggingface/transformers
```

This should not be required after transformers-v4.33.4.

## Limitations

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.