IIC/RigoChat-7b-v2
gonzalo-santamaria-iic committed
Commit bb22957 · verified · 1 parent: c6e1f75

Update README.md

Files changed (1): README.md (+30 −3)
README.md CHANGED
@@ -38,7 +38,7 @@ Remarkably, this model was trained on a single A100 GPU with limited computation
 
 ## How to Get Started with the Model
 
-- To load model and tokenizer:
+### To load the model and tokenizer:
 
 ```python
 from transformers import (
@@ -47,7 +47,7 @@ from transformers import (
 )
 import torch
 
-model_name = "ignita/RigoChat-7b-v2"
+model_name = "IIC/RigoChat-7b-v2"
 
 model = AutoModelForCausalLM.from_pretrained(
     model_name,
@@ -58,8 +58,35 @@ model = AutoModelForCausalLM.from_pretrained(
 tokenizer = AutoTokenizer.from_pretrained(
     model_name,
     trust_remote_code=True
-))))))))))))))))))))))
+)
+```
+
+### Sample generation
+
+```python
+messages = [
+    {"role": "user", "content": "¿Cómo puedo transformar un diccionario de listas en una lista de diccionarios en Python?"}
+]
+
+text = tokenizer.apply_chat_template(
+    messages,
+    tokenize=False,
+    add_generation_prompt=True
+)
+model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+generated_ids = model.generate(
+    **model_inputs,
+    max_new_tokens=512
+)
+generated_ids = [
+    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+]
+
+response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+```
 
+### Tool Use
 
 ## Training Details
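As an aside, the sample prompt in the diff asks (in Spanish) how to transform a dictionary of lists into a list of dictionaries in Python. A minimal sketch of one common answer, with illustrative data that is not part of the model card:

```python
# Hypothetical input: a dict mapping field names to equal-length lists.
data = {"name": ["Ana", "Luis"], "age": [30, 25]}

# zip(*data.values()) pairs up the i-th element of every list;
# dict(zip(data, row)) rebuilds one dict per row using the original keys.
rows = [dict(zip(data, row)) for row in zip(*data.values())]

print(rows)  # → [{'name': 'Ana', 'age': 30}, {'name': 'Luis', 'age': 25}]
```

This relies on dicts preserving insertion order (guaranteed since Python 3.7), so keys and values stay aligned.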