tobischimanski committed
Commit e883690 · 1 Parent(s): e9593c8
Update README.md

README.md CHANGED
@@ -8,13 +8,13 @@ tags:
 - governance
 ---
 
-# Model Card for 
+# Model Card for GovernanceBERT-governance
 
 ## Model Description
 
 Based on [this paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4622514), this is the GovDistilRoBERTa-governance language model. A language model that is trained to better classify governance texts in the ESG domain.
 
-Using the [
+Using the [GovernanceBERT-base](https://huggingface.co/ESGBERT/GovernanceBERT-base) model as a starting point, the GovDistilRoBERTa-governance Language Model is additionally fine-trained on a 2k governance dataset to detect governance text samples.
 
 ## How to Get Started With the Model
 You can use the model with a pipeline for text classification:
@@ -23,8 +23,8 @@ You can use the model with a pipeline for text classification:
 from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
 import datasets
 
-tokenizer_name = "ESGBERT/
-model_name = "ESGBERT/
+tokenizer_name = "ESGBERT/GovernanceBERT-governance"
+model_name = "ESGBERT/GovernanceBERT-governance"
 
 model = AutoModelForSequenceClassification.from_pretrained(model_name)
 tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, max_len=512)
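The hunk ends before the pipeline itself is constructed, so the usage example in the updated README is only partially visible here. As a minimal sketch of how the loaded model and tokenizer would typically be wrapped into a text-classification pipeline, assuming the continuation not shown in this diff; the example sentence and the padding/truncation settings are illustrative, not part of this commit:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Repository used for both the model weights and the tokenizer (from the updated README).
tokenizer_name = "ESGBERT/GovernanceBERT-governance"
model_name = "ESGBERT/GovernanceBERT-governance"

model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name, max_len=512)

# Wrap model and tokenizer in a text-classification pipeline.
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)

# Illustrative governance-related input sentence (assumed, not from the model card).
result = pipe(
    "The board established an independent audit committee to oversee executive compensation.",
    padding=True,
    truncation=True,
)
print(result)  # e.g. [{'label': ..., 'score': ...}]
```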