---
base_model: AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct
language:
- sv
- da
- 'no'
- en
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.7
tags:
- translation
---
# Model Card for gpt-sw3-6.7b-v2-translator
The `gpt-sw3-6.7b-v2-translator` is a fine-tuned version of `gpt-sw3-6.7b-v2-instruct`, trained on a carefully selected dataset of translation pairs gathered by AI Sweden.
## How to use:
```python
import torch
from transformers import pipeline, StoppingCriteriaList, StoppingCriteria

device = "cuda" if torch.cuda.is_available() else "cpu"


# (Optional) - define a stopping criterion
# We ideally want the model to stop generating once the Bot's response is complete
class StopOnTokenCriteria(StoppingCriteria):
    def __init__(self, stop_token_id):
        self.stop_token_id = stop_token_id

    def __call__(self, input_ids, scores, **kwargs):
        return input_ids[0, -1] == self.stop_token_id


stop_on_token_criteria = StopOnTokenCriteria(stop_token_id=2)

pipe = pipeline(
    "text-generation",
    "AI-Sweden-Models/gpt-sw3-6.7b-v2-translator",
    device=device
)

text = "I like to eat ice cream in the summer."

prompt = f"<|endoftext|><s>User: Översätt till Svenska från Engelska\n{text}<s>Bot:"

response = pipe(
    prompt,
    max_length=768,
    stopping_criteria=StoppingCriteriaList([stop_on_token_criteria])
)

print(response[0]["generated_text"].split("<s>Bot: ")[-1])
```
```python
>>> "Jag tycker om att äta glass på sommaren."
```
## Dataset: