Update README.md
README.md
@@ -42,13 +42,12 @@ model-index:
 This model is a W.I.P
 
 ## Model description
-
-This model is a fine-tuned version of [KBLab/bart-base-swedish-cased](https://huggingface.co/KBLab/bart-base-swedish-cased) on the [Gabriel/bart-base-cnn-swe](https://huggingface.co/datasets/Gabriel/cnn_daily_swe) dataset.
+BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. This model is a fine-tuned version of [KBLab/bart-base-swedish-cased](https://huggingface.co/KBLab/bart-base-swedish-cased) on the [Gabriel/bart-base-cnn-swe](https://huggingface.co/datasets/Gabriel/cnn_daily_swe) dataset and can be used for summarization tasks.
 
 
 ## Intended uses & limitations
 
-This model should only be used to fine-tune further on.
+This model should only be used for further fine-tuning and for summarization tasks.
 
 ## Training procedure
 
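Since the updated card says the checkpoint can be used for summarization, a minimal usage sketch with the `transformers` summarization pipeline could look like the following. The diff only names the base model and the dataset, so the fine-tuned repo id used here is an assumption, not confirmed by the commit:

```python
# Hypothetical repo id for the fine-tuned checkpoint; the diff only names the
# base model (KBLab/bart-base-swedish-cased) and the training dataset, so this
# identifier is an assumption.
MODEL_ID = "Gabriel/bart-base-cnn-swe"


def summarize(text: str, model_id: str = MODEL_ID) -> str:
    """Summarize Swedish text with the fine-tuned BART checkpoint."""
    # Lazy import so the model is only downloaded when the function is called.
    from transformers import pipeline

    summarizer = pipeline("summarization", model=model_id)
    result = summarizer(text, max_length=130, min_length=30, do_sample=False)
    return result[0]["summary_text"]
```

Because the card states the model is a work in progress and intended mainly for further fine-tuning, summaries from this sketch should be treated as provisional.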