Commit b8827d3 by AdaptLLM (verified) · Parent: af6ae93

Update README.md

Browse files
Files changed (1) hide show
  1. README.md +3 -3
README.md CHANGED
@@ -13,7 +13,7 @@ tags:
 - legal
 ---
 
-# Adapting Large Language Models to Domains (ICLR 2024)
+# Adapting LLMs to Domains via Continual Pre-Training (ICLR 2024)
 This repo contains the domain-specific base model developed from **LLaMA-1-13B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).
 
 We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
@@ -71,7 +71,7 @@ outputs = model.generate(input_ids=inputs, max_length=2048)[0]
 answer_start = int(inputs.shape[-1])
 pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
 
-print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}')
+print(pred)
 ```
 
 ## 2. Domain-Specific Tasks
@@ -97,7 +97,7 @@ You can use the following scripts to reproduce our results and evaluate any othe
 DOMAIN='law'
 
 # Specify any Huggingface model name (Not applicable to chat models)
-MODEL='AdaptLLM/law-LLM'
+MODEL='AdaptLLM/law-LLM-13B'
 
 # Model parallelization:
 # - Set MODEL_PARALLEL=False if the model fits on a single GPU.
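For orientation, the `print` change in the second hunk sits at the end of the README's inference example, and the third hunk sets the model name used by the evaluation script. The sketch below reconstructs the surrounding snippet under the assumption that the model is loaded with the standard `transformers` Auto classes; the prompt text and loading arguments are illustrative, not taken from the README.

```python
# A minimal sketch, assuming standard transformers Auto classes and float16
# weights; the prompt text below is illustrative, not from the README.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "AdaptLLM/law-LLM-13B"  # model name set in the updated evaluation script
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

user_input = "What is the statute of limitations for breach of contract?"  # illustrative prompt
inputs = tokenizer(user_input, return_tensors="pt").input_ids.to(model.device)

outputs = model.generate(input_ids=inputs, max_length=2048)[0]

# Decode only the newly generated tokens and print them directly,
# matching the updated README (print(pred) instead of the templated print).
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
print(pred)
```

The commit itself only swaps the templated `### User Input / ### Assistant Output` print for a plain `print(pred)`, so the script now emits just the decoded continuation, and updates `MODEL` to the 13B checkpoint named by this repo.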