---
license: apache-2.0
tags:
- merge
- mergekit
- wanglab/ClinicalCamel-70B
- epfl-llm/meditron-70b
- allenai/tulu-2-dpo-70b
base_model:
- NousResearch/Llama-2-70b-hf
- allenai/tulu-2-dpo-70b
---

# Medmerge-tulu-70b

Medmerge-tulu-70b is a DARE-TIES merge of the following models on top of the [NousResearch/Llama-2-70b-hf](https://huggingface.co/NousResearch/Llama-2-70b-hf) base, built with [MergeKit](https://github.com/arcee-ai/mergekit):

* [wanglab/ClinicalCamel-70B](https://huggingface.co/wanglab/ClinicalCamel-70B)
* [epfl-llm/meditron-70b](https://huggingface.co/epfl-llm/meditron-70b)
* [allenai/tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b)

## 🧩 Configuration

```yaml
models:
  - model: NousResearch/Llama-2-70b-hf
    # no parameters necessary for base model
  - model: wanglab/ClinicalCamel-70B
    parameters:
      weight: 0.08
      density: 0.45
  - model: epfl-llm/meditron-70b
    parameters:
      weight: 0.08
      density: 0.45
  - model: allenai/tulu-2-dpo-70b
    parameters:
      weight: 0.08
      density: 0.45
merge_method: dare_ties
base_model: NousResearch/Llama-2-70b-hf
parameters:
  int8_mask: true
dtype: bfloat16
```
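
To reproduce the merge, the configuration above can be saved as `config.yaml` and passed to MergeKit's `mergekit-yaml` entry point. A minimal sketch, assuming MergeKit is installed from PyPI and there is enough disk space for three 70B checkpoints; the output path is illustrative, and the exact flags should be checked against `mergekit-yaml --help`:

```python
# Install MergeKit (notebook-style; drop the leading "!" in a shell)
!pip install -qU mergekit

# config.yaml holds the YAML block above; ./medmerge-tulu-70b is an illustrative output path
!mergekit-yaml config.yaml ./medmerge-tulu-70b --copy-tokenizer --lazy-unpickle
```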

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Technoculture/Medmerge-tulu-70b"
messages = [{"role": "user", "content": "I am feeling sleepy these days"}]

# Format the conversation with the tokenizer's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# device_map="auto" shards the 70B checkpoint across all available GPUs
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
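
In float16, the 70B weights alone occupy roughly 140 GB, so the pipeline above needs multiple high-memory GPUs. On smaller setups, one practical alternative is 4-bit quantization through the `bitsandbytes` integration in 🤗 Transformers. A minimal sketch, mirroring the generation settings above:

```python
!pip install -qU transformers accelerate bitsandbytes

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Technoculture/Medmerge-tulu-70b"

# Quantize weights to 4-bit NF4 at load time, cutting memory roughly 4x vs. float16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

messages = [{"role": "user", "content": "I am feeling sleepy these days"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```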