Text Generation · Transformers · Safetensors · llama · text-generation-inference · Inference Endpoints
mfromm committed · verified · Commit 831b8b0 · 1 Parent(s): 6666131

Update README.md

Files changed (1)
  1. README.md +3 −4
README.md CHANGED
@@ -37,8 +37,7 @@ license: other
 
 
 [Teuken-7B-instruct-research-v0.4](https://huggingface.co/openGPT-X/Teuken-7B-instruct-research-v0.4) is an instruction-tuned 7B parameter multilingual large language model (LLM) pre-trained with 4T tokens within the research project [OpenGPT-X](https://opengpt-x.de).
-The base model [Teuken-7B-base-v0.4](https://huggingface.co/openGPT-X/Teuken-7B-base-v0.4) is available on request.
-
+The base model Teuken-7B-base-v0.4 is available on 📧 [request](contact@opengpt-x.de).
 
 
 ### Model Description
@@ -72,7 +71,7 @@ The model is not intended for use in math and coding tasks.
 
 <!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
-[Teuken-7B-instruct-research-v0.4](https://huggingface.co/openGPT-X/Teuken-7B-instruct-research-v0.4) is an instruction-tuned version of [Teuken-7B-base-v0.4](https://huggingface.co/openGPT-X/Teuken-7B-base-v0.4) (base model is available on request) that is not completely free from biases and hallucinations.
+[Teuken-7B-instruct-research-v0.4](https://huggingface.co/openGPT-X/Teuken-7B-instruct-research-v0.4) is an instruction-tuned version of Teuken-7B-base-v0.4 (base model is available on 📧 [request](contact@opengpt-x.de)) that is not completely free from biases and hallucinations.
 
 ## How to Get Started with the Model
 
@@ -191,7 +190,7 @@ For German data we include the complete data sets from the given table:
 ### Training Procedure
 
 <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-Instruction fine-tuned version of [Teuken-7B-base-v0.4](https://huggingface.co/openGPT-X/Teuken-7B-base-v0.4).
+Instruction fine-tuned version of Teuken-7B-base-v0.4.
 
 More information regarding the pre-training is available in our model preprint ["Teuken-7B-Base & Teuken-7B-Instruct: Towards European LLMs"](https://arxiv.org/abs/2410.03730).
 
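
For orientation, since the edited text sits directly above the card's "## How to Get Started with the Model" section (not shown in this diff), here is a minimal sketch of loading the instruct checkpoint with the 🤗 Transformers Auto classes. This is not the card's official quick-start: the use of `trust_remote_code=True`, the `bfloat16` dtype, and the chat-message role name are assumptions and may need adjusting for this checkpoint.

```python
# Minimal sketch, assuming the checkpoint loads via the standard Auto classes
# and ships a chat template; not the model card's official quick-start code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "openGPT-X/Teuken-7B-instruct-research-v0.4"

# trust_remote_code=True is an assumption in case the repo ships custom code.
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # assumed dtype; adjust to your hardware
    device_map="auto",
    trust_remote_code=True,
)

# The role name "user" is an assumption about this model's chat template.
messages = [{"role": "user", "content": "Wer bist du?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Consult the card's own "How to Get Started with the Model" section for the authoritative prompt format and any language-specific template arguments.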