---
base_model:
- arcee-ai/sec-mistral-7b-instruct-1.6-epoch
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
library_name: transformers
tags:
- code
- instruct
- llm
- 7b
- dolphin
license: apache-2.0
datasets:
- cognitivecomputations/dolphin
language:
- en
---
# Dolphin Mistral Instruct

This is a merged language model created with the SLERP (spherical linear interpolation) method.

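SLERP interpolates between the two parent models' weights along the great-circle arc between them rather than along a straight line, which tends to preserve the scale of the weights better than plain averaging. Below is a minimal NumPy sketch of the underlying formula, treating each weight tensor as a high-dimensional vector (illustrative only, not the actual merge code):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors (illustrative)."""
    v0_flat, v1_flat = v0.ravel(), v1.ravel()
    # Angle between the two tensors, treated as high-dimensional vectors.
    cos_omega = np.dot(v0_flat, v1_flat) / (
        np.linalg.norm(v0_flat) * np.linalg.norm(v1_flat) + eps
    )
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.abs(np.sin(omega)) < eps:
        # Nearly parallel tensors: fall back to ordinary linear interpolation.
        return (1.0 - t) * v0 + t * v1
    # Interpolate along the great-circle arc between the two tensors.
    return (np.sin((1.0 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)

# Example: blend two toy "weight" tensors halfway between the parents.
a, b = np.random.randn(4, 4), np.random.randn(4, 4)
merged = slerp(0.5, a, b)
```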
### Merged models

The following models were merged to create this one:

- [arcee-ai/sec-mistral-7b-instruct-1.6-epoch](https://huggingface.co/arcee-ai/sec-mistral-7b-instruct-1.6-epoch)
- [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)

### Configuration

The following configuration was used to produce this model:

```yaml
base_model:
- arcee-ai/sec-mistral-7b-instruct-1.6-epoch
- cognitivecomputations/dolphin-2.8-mistral-7b-v02

library_name: transformers

dtype: bfloat16
```
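The snippet above is abbreviated: it lists the parent models and dtype but omits the fields a SLERP merge normally requires. Assuming mergekit was the tool used (the card does not say), a complete configuration would look roughly like the following hypothetical sketch; the `merge_method`, `layer_range`, and interpolation factor `t` values actually used for this model are not recorded in the card:

```yaml
# Hypothetical mergekit-style SLERP config; values shown are assumptions.
merge_method: slerp
base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
slices:
  - sources:
      - model: arcee-ai/sec-mistral-7b-instruct-1.6-epoch
        layer_range: [0, 32]   # Mistral 7B has 32 transformer layers
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [0, 32]
parameters:
  t: 0.5  # interpolation factor: 0 = first model, 1 = second model
dtype: bfloat16
```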

## Usage
The model weights are stored as SafeTensors and can be loaded with the Transformers library. Here's an example of loading the model and generating text in Python:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "path/to/model"  # replace with your local model directory

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # load in the checkpoint's native dtype (bfloat16)
    device_map="auto",   # place weights on available devices (needs `accelerate`)
)

# Tokenize the prompt and move it to the same device as the model.
input_text = "Write a short story about"
input_ids = tokenizer.encode(input_text, return_tensors="pt").to(model.device)

# Sample up to 200 new tokens with top-k / nucleus (top-p) sampling.
output_ids = model.generate(
    input_ids,
    max_new_tokens=200,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    num_return_sequences=1,
    pad_token_id=tokenizer.eos_token_id,  # Mistral tokenizers define no pad token
)

output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output_text)
```
Make sure to replace `path/to/model` with the actual path to your model's directory (or its Hugging Face Hub repo id).
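
Since one of the merged parents (Dolphin 2.8) was trained on ChatML-formatted conversations, chat-style prompts will likely work better when wrapped in the tokenizer's chat template. A sketch, assuming the merged tokenizer config carries a chat template (check `tokenizer.chat_template` first):

```python
# Hypothetical chat-style usage; assumes the merged tokenizer config
# includes a chat template (Dolphin 2.8 uses ChatML).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a short story about a lighthouse."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn header
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=200, do_sample=True, top_p=0.95)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```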