---
license: cc-by-nc-2.0
datasets:
- cosimoiaia/Loquace-102k
language:
- it
pipeline_tag: conversational
tags:
- alpaca
- llama
- llm
- finetune
- Italian
- qlora
---

Model Card for Loquace-12B

# 🇮🇹 Loquace-12B 🇮🇹

An exclusively Italian-speaking, instruction-finetuned Large Language Model. 🇮🇹

The Loquace Italian LLM models were created as a proof of concept to evaluate how a foundational LLM can be tuned to a specific language by instruction-tuning it with QLoRa on a dataset written in that language.


The QLoRa (https://github.com/artidoro/qlora) fine-tuning method significantly lowers the resource requirements compared to other available methods, making it possible to run the process on significantly larger datasets while still using consumer GPUs and achieving high accuracy.
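As an illustration of the approach, the sketch below shows how a QLoRa-style fine-tune could be set up with `transformers`, `peft`, and `bitsandbytes`. The base model id, LoRA hyperparameters, and quantization settings are illustrative assumptions, not the exact configuration used to train Loquace.

```python
# Minimal QLoRa-style setup sketch (illustrative; not the exact Loquace training config).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "EleutherAI/pythia-12b"  # assumed base model, per the Loquace family list below

# Load the frozen base model in 4-bit to keep memory within consumer-GPU limits.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Attach small trainable LoRA adapters; only these weights are updated during training.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Because only the low-rank adapter weights are trained while the quantized base model stays frozen, the memory footprint is a small fraction of full fine-tuning, which is what makes training on consumer GPUs practical.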

## Model Description

Loquace-12B is the first 12B Italian Large Language Model, trained using QLoRa on a large dataset of 102k question/answer pairs written exclusively in Italian.

The related code can be found at:
https://github.com/cosimoiaia/Loquace


Loquace-12B is part of the big Loquace family:

- https://huggingface.co/cosimoiaia/Loquace-70m - Based on pythia-70m
- https://huggingface.co/cosimoiaia/Loquace-410m - Based on pythia-410m
- https://huggingface.co/cosimoiaia/Loquace-7B - Based on Falcon-7B
- https://huggingface.co/cosimoiaia/Loquace-12B - Based on pythia-12B
- https://huggingface.co/cosimoiaia/Loquace-20B - Based on gpt-neox-20B

## Usage


```python
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    BitsAndBytesConfig
)

tokenizer = AutoTokenizer.from_pretrained("cosimoiaia/Loquace-12B", padding_side="right", use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    "cosimoiaia/Loquace-12B",
    device_map="auto",
    # Load the weights in 4-bit so the 12B model fits on consumer GPUs.
    quantization_config=BitsAndBytesConfig(
      load_in_4bit=True,
      llm_int8_has_fp16_weight=False
    )
)
```
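Once the model and tokenizer are loaded, generation works as with any causal LM. The sketch below is a minimal inference example; the Alpaca-style Italian prompt template and the sampling parameters are assumptions based on the instruction dataset, not an official template documented for Loquace.

```python
# Illustrative inference sketch; the prompt format below is an assumption,
# modeled on the Alpaca-style instruction data used for fine-tuning.
prompt = (
    "### Istruzione:\n"
    "Spiega brevemente cos'è il Colosseo.\n\n"
    "### Risposta:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```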


## Training

Loquace-12B was trained on a conversational dataset of 102k question/answer pairs in Italian.
The training data was assembled from translations of the original Alpaca dataset and other sources such as the OpenAssistant dataset.
The model was trained for only 3000 iterations, which took 18 hours on 4 RTX 3090 GPUs kindly provided by Genesis Cloud. (https://gnsiscld.co/26qhlf)
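The training data is published as cosimoiaia/Loquace-102k (referenced in the metadata above). A quick way to inspect it could look like the following, assuming the dataset exposes a standard `train` split; the column names printed depend on its actual schema.

```python
# Quick look at the instruction dataset the model was fine-tuned on.
from datasets import load_dataset

loquace_ds = load_dataset("cosimoiaia/Loquace-102k", split="train")
print(loquace_ds)     # number of rows and column names
print(loquace_ds[0])  # one example question/answer pair
```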

## Limitations

- Loquace-12B may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs.
- The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified.
- The training data primarily consists of conversational examples and may not generalize well to other types of tasks or domains.

## Dependencies

- PyTorch
- Transformers library by Hugging Face
- bitsandbytes
- QLoRa