Update README.md
README.md CHANGED
@@ -13,7 +13,7 @@ tags:
 - legal
 ---
 
-# Adapting
+# Adapting LLMs to Domains via Continual Pre-Training (ICLR 2024)
 This repo contains the domain-specific base model developed from **LLaMA-1-13B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).
 
 We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
@@ -71,7 +71,7 @@ outputs = model.generate(input_ids=inputs, max_length=2048)[0]
 answer_start = int(inputs.shape[-1])
 pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
 
-print(
+print(pred)
 ```
 
 ## 2. Domain-Specific Tasks
@@ -97,7 +97,7 @@ You can use the following scripts to reproduce our results and evaluate any other
 DOMAIN='law'
 
 # Specify any Huggingface model name (Not applicable to chat models)
-MODEL='AdaptLLM/law-LLM'
+MODEL='AdaptLLM/law-LLM-13B'
 
 # Model parallelization:
 # - Set MODEL_PARALLEL=False if the model fits on a single GPU.
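For context, the lines touched by the second hunk come from the README's Python inference example. Below is a minimal sketch of the full flow those fragments belong to, assuming the standard `transformers` API and the `AdaptLLM/law-LLM-13B` checkpoint named in the diff; the prompt text is only an illustrative placeholder, not taken from the README.

```python
# Sketch of the inference flow the changed snippet belongs to.
# Assumes the Hugging Face `transformers` library; the prompt is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "AdaptLLM/law-LLM-13B"  # checkpoint named in the diff
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Any legal-domain prompt; this question is a hypothetical example.
text = "What is the statute of limitations for a breach-of-contract claim?"
inputs = tokenizer(text, return_tensors="pt").input_ids.to(model.device)

# Generation with the README's max_length setting.
outputs = model.generate(input_ids=inputs, max_length=2048)[0]

# Decode only the newly generated tokens, as in the snippet above.
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
print(pred)
```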