Tags: Text Generation · Transformers · Safetensors · llama · text-generation-inference · Inference Endpoints
mfromm committed · verified
Commit f958f36 · 1 Parent(s): 9e78cf8

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -33,9 +33,9 @@ base_model:
   - openGPT-X/Teuken-7B-base-v0.4
 license: apache-2.0
 ---
-# Model Card for HalloEurope-7B-Instruct
+# Model Card for Teuken-7B-instruct-v0.4
 
-Teuken-7B-chat-v0.4 is an instruction-tuned version of Teuken-7B-base-v0.4.
+Teuken-7B-instruct-v0.4 is an instruction-tuned version of Teuken-7B-base-v0.4.
 
 
 ### Model Description
@@ -51,7 +51,7 @@ Teuken-7B-chat-v0.4 is an instruction-tuned version of Teuken-7B-base-v0.4.
 ## Uses
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-Teuken-7B-chat-v0.4 is intended for commercial and research use in all official 24 European languages. Since Teuken-7B-chat-v0.4 focuses on covering all 24 EU languages, it renders more stable results across these languages and better reflects European values in its answers than English-centric models. It is therefore specialized for use in multilingual tasks.
+Teuken-7B-instruct-v0.4 is intended for commercial and research use in all official 24 European languages. Since Teuken-7B-chat-v0.4 focuses on covering all 24 EU languages, it renders more stable results across these languages and better reflects European values in its answers than English-centric models. It is therefore specialized for use in multilingual tasks.
 
 ### Out-of-Scope Use
 
@@ -63,7 +63,7 @@ The model is not intended for use in math and coding tasks.
 
 <!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
-Teuken-7B-chat-v0.4 is an instruction-tuned version of Teuken-7B-base-v0.4 that is not completely free from biases and hallucinations.
+Teuken-7B-instruct-v0.4 is an instruction-tuned version of Teuken-7B-base-v0.4 that is not completely free from biases and hallucinations.
 
 ## How to Get Started with the Model
 
@@ -89,7 +89,7 @@ prompt = f"System: {system_messages[lang_code]}\nUser: {user}\nAssistant:<s>"
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "openGPT-X/Teuken-7B-chat-v0.4"
+model_name = "openGPT-X/Teuken-7B-instruct-v0.4"
 device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
 tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
 model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.bfloat16).to(device)
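For context, the last hunk's header references the README's prompt-assembly line, `prompt = f"System: {system_messages[lang_code]}\nUser: {user}\nAssistant:<s>"`. A minimal sketch of that step is below; the `system_messages` contents, the language codes, and the `build_prompt` helper are illustrative assumptions, not taken from the actual model card.

```python
# Sketch of the System/User/Assistant prompt format referenced in the diff.
# The dictionary entries below are placeholders (assumptions), not the
# official Teuken system messages.
system_messages = {
    "EN": "A chat between a human and an AI assistant.",  # assumed text
    "DE": "Ein Gespräch zwischen einem Menschen und einem KI-Assistenten.",  # assumed text
}

def build_prompt(lang_code: str, user: str) -> str:
    """Assemble the prompt string in the format shown in the README hunk."""
    return f"System: {system_messages[lang_code]}\nUser: {user}\nAssistant:<s>"

prompt = build_prompt("EN", "What is Teuken-7B?")
```

The resulting string can then be tokenized and passed to the model loaded in the snippet above.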