update model card
README.md
CHANGED
@@ -1,3 +1,65 @@
---
license: llama2
language:
- it
tags:
- text-generation-inference
---
# Model Card for LLaMAntino-2-7b-ITA

## Model description

<!-- Provide a quick summary of what the model is/does. -->

**LLaMAntino-2-7b** is a *Large Language Model (LLM)*, an Italian-adapted version of **LLaMA 2**.
This model aims to provide Italian NLP researchers with a base model for natural language generation tasks.

The model was trained with *QLoRA*, using [clean_mc4_it medium](https://huggingface.co/datasets/gsarti/clean_mc4_it/viewer/medium) as training data.
If you are interested in more details regarding the training procedure, you can find the code we used at the following link:
- **Repository:** https://github.com/swapUniba/LLaMAntino

**NOTICE**: the code has not been released yet. We apologize for the delay; it will be available as soon as possible! In the meantime, an illustrative sketch of the general recipe is shown at the end of this section.

- **Developed by:** Pierpaolo Basile, Elio Musacchio, Marco Polignano, Lucia Siciliani, Giuseppe Fiameni, Giovanni Semeraro
- **Funded by:** PNRR project FAIR - Future AI Research
- **Compute infrastructure:** [Leonardo](https://www.hpc.cineca.it/systems/hardware/leonardo/) supercomputer
- **Model type:** LLaMA 2
- **Language(s) (NLP):** Italian
- **License:** Llama 2 Community License
- **Finetuned from model:** [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
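
Since the training code is not yet public, the following is a minimal illustrative sketch of the general QLoRA recipe (a 4-bit quantized base model plus trainable LoRA adapters via *peft*). It is **not** the authors' actual training script: the base model ID is the only value taken from this card, and every hyperparameter below is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "meta-llama/Llama-2-7b-hf"  # base model listed in this card

# Load the base model quantized to 4 bits (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach trainable low-rank adapters (the "LoRA" part);
# r, alpha, dropout, and target modules are illustrative placeholders
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Training would then proceed with a standard Trainer/SFTTrainer loop
# over the clean_mc4_it text corpus.
```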

## How to Get Started with the Model

Below you can find an example of model usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swap-uniba/LLaMAntino-2-7b-hf-ITA"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Scrivi qui un possibile prompt"  # "Write a possible prompt here"

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids=input_ids)

# Decode only the newly generated tokens, skipping the prompt portion
print(tokenizer.batch_decode(outputs.detach().cpu().numpy()[:, input_ids.shape[1]:], skip_special_tokens=True)[0])
```
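
By default, `generate` falls back to the model's generation config, which typically means greedy decoding and a short output budget. For longer or more varied completions you can pass decoding parameters explicitly; the values below are illustrative, not recommendations from the authors:

```python
outputs = model.generate(
    input_ids=input_ids,
    max_new_tokens=128,  # upper bound on newly generated tokens
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # illustrative values, tune to taste
    top_p=0.9,
)
```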

If you are facing issues when loading the model (e.g., running out of GPU memory), you can try to load it quantized:

```python
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)
```

*Note*: the model loading strategy above requires the [*bitsandbytes*](https://pypi.org/project/bitsandbytes/) and [*accelerate*](https://pypi.org/project/accelerate/) libraries.
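
Recent versions of *transformers* express the same option through a `BitsAndBytesConfig` object; a minimal sketch of that route, assuming an up-to-date *transformers*/*bitsandbytes* install:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # same effect as load_in_8bit=True
    device_map="auto",  # place layers across available GPUs/CPU automatically
)
```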

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

If you use this model in your research, please cite the following:

*Coming soon*!