sooolee committed
Commit 43fb8c2 · Parent: d654f89

Update README.md

Files changed (1):
  README.md (+14 -16)
README.md CHANGED
@@ -7,22 +7,20 @@ language:
 metrics:
 - rouge
 library_name: transformers
-pipeline_tag: summarization
 model-index:
-- name: bart-large-cnn
-  results:
-  - task:
-      name: Summarization
-      type: summarization
-    dataset:
-      name: samsum
-      type: samsum
-      split: validation
-    metrics:
-    - name: Rouge1
-      type: rouge
-      value: 43.115
-
+- name: bart-large-cnn
+  results:
+  - task:
+      name: Summarization
+      type: summarization
+    dataset:
+      name: samsum
+      type: samsum
+      split: validation
+    metrics:
+    - name: Rouge1
+      type: rouge
+      value: 43.115
 ---
 
 # bart-large-cnn-finetuned-samsum-lora
@@ -32,7 +30,7 @@ This model is a further fine-tuned version of [facebook/bart-large-cnn](https://
 ## Model description
 
 * This model further finetuned 'bart-large-cnn' on the more conversational samsum dataset.
-* LoRA (r = 8) was used to further reduced the model size. Only less than 1.2M parameters were trained (0.23% of original bart-large 510M parameters).
+* Huggingface [PEFT Library](https://github.com/huggingface/peft) LoRA (r = 8) was used to further reduced the model size. Less than 1.2M parameters were trained (0.23% of original bart-large 510M parameters).
 * The model checkpoint is less than 5MB.
 
 ## Intended uses & limitations
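
The bullet added in the second hunk points to the Hugging Face PEFT library. As a quick illustration of what that setup typically looks like, here is a minimal sketch of attaching a rank-8 LoRA adapter to facebook/bart-large-cnn. The commit only states r = 8, so the lora_alpha, lora_dropout, and target_modules values below are assumptions, not values taken from the author's training script; targeting the q_proj and v_proj attention projections at rank 8 adds roughly 1.18M trainable parameters, which lines up with the "less than 1.2M" figure in the README.

```python
# Minimal sketch (not the author's exact training code): attach a rank-8 LoRA
# adapter to bart-large-cnn with the Hugging Face PEFT library.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                                  # rank stated in the README
    lora_alpha=32,                        # assumption: common default, not given in the commit
    lora_dropout=0.1,                     # assumption
    target_modules=["q_proj", "v_proj"],  # assumption: BART attention projections
)

model = get_peft_model(base, lora_config)
# Reports the trainable-parameter count (about 1.2M for this configuration,
# a small fraction of the full bart-large model).
model.print_trainable_parameters()
```

At inference time, the small adapter checkpoint (the "less than 5MB" artifact mentioned in the description) would be loaded back onto the same base model with peft.PeftModel.from_pretrained; presumably the adapter lives at sooolee/bart-large-cnn-finetuned-samsum-lora, matching the README title, though the commit itself does not spell out the repository id.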