Pavarissy committed (verified)
Commit fed11ce · 1 Parent(s): 4a9fbef

Update README.md

Files changed (1): README.md +35 -1
README.md CHANGED
@@ -5,10 +5,12 @@ license: apache-2.0
tags:
- trl
- sft
- - generated_from_trainer
model-index:
- name: merlinite-sql-7b-thai-instructlab
  results: []
+ language:
+ - th
+ pipeline_tag: text-generation
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -22,6 +24,38 @@ This model is a fine-tuned version of [instructlab/merlinite-7b-lab](https://hug

More information needed

+ ## How to Use
+ Install the dependencies:
+ ```bash
+ pip install -qU transformers accelerate
+ ```
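+
+ Optionally, if GPU memory is tight, the 7B checkpoint can also be loaded quantized — a minimal sketch, assuming the optional `bitsandbytes` package is installed (this is not part of the original snippet):
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+ import transformers
+ import torch
+
+ model_id = "Pavarissy/merlinite-sql-7b-thai-instructlab"
+ # Load the weights in 4-bit; compute still runs in fp16
+ bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
+ model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
+ ```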
+
+ To run the model, build a chat-formatted prompt and generate with a `text-generation` pipeline:
+ ```python
+ from transformers import AutoTokenizer
+ import transformers
+ import torch
+
+ # Thai question, roughly: "What is the highest financial capability score for
+ # customers in Africa in 2022?", followed by the table schema and sample rows
+ question = "คะแนนความสามารถทางการเงินสูงสุดสำหรับลูกค้าในแอฟริกาในปี 2022 คือเท่าใด \nHere is a Table: CREATE TABLE financial_capability (id INT, customer_name VARCHAR(50), region VARCHAR(50), score INT, year INT); INSERT INTO financial_capability (id, customer_name, region, score, year) VALUES (1, 'Thabo', 'Africa', 9, 2022), (2, 'Amina', 'Africa', 8, 2022);"
+
+ model = "Pavarissy/merlinite-sql-7b-thai-instructlab"
+ messages = [{"role": "user", "content": question}]
+
+ tokenizer = AutoTokenizer.from_pretrained(model)
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model,
+     torch_dtype=torch.float16,
+     device_map="auto",
+ )
+
+ # Generate the SQL answer with sampling; the output contains the prompt followed by the completion
+ outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
+ print(outputs[0]["generated_text"])
+ ```
+
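+ By default the pipeline echoes the prompt inside `generated_text`. A minimal sketch for keeping only the completion, using the pipeline's standard `return_full_text` flag:
+ ```python
+ # Drop the echoed prompt and print only the newly generated SQL
+ outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, return_full_text=False)
+ print(outputs[0]["generated_text"])
+ ```
+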
## Intended uses & limitations

More information needed