jcfneto committed on
Commit 104989a · 1 Parent(s): 52e0a62

Update README.md

Files changed (1)
  1. README.md +19 -38
README.md CHANGED
@@ -6,52 +6,33 @@ model-index:
  results: []
  ---
 
- <!-- This model card has been generated automatically according to the information Keras had access to. You should
- probably proofread and complete it, then remove this comment. -->
-
- # bert-tv-portuguese
-
- This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Train Loss: 2.3734
- - Epoch: 8
 
  ## Model description
 
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
 
- The following hyperparameters were used during training:
- - optimizer: {'name': 'AdamW', 'weight_decay': 0.004, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.000102, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- - training_precision: float32
 
- ### Training results
 
- | Train Loss | Epoch |
- |:----------:|:-----:|
- | 6.3513 | 0 |
- | 4.4775 | 1 |
- | 3.5899 | 2 |
- | 3.1742 | 3 |
- | 2.9347 | 4 |
- | 2.7461 | 5 |
- | 2.5957 | 6 |
- | 2.4721 | 7 |
- | 2.3734 | 8 |
 
- ### Framework versions
 
  - Transformers 4.27.3
  - TensorFlow 2.11.1
 
  results: []
  ---
 
+ # BERT-TV
 
  ## Model description
 
+ BERT-TV is a BERT model pre-trained from scratch on a dataset of television reviews in Brazilian Portuguese.
+ The model is tailored to capture the nuances of context and sentiment expressed in television reviews.
+ BERT-TV has 6 layers, 12 attention heads, and an embedding dimension of 768, making it well suited to
+ NLP tasks involving television content in Portuguese.
 
+ ## Usage ideas
 
+ - Sentiment analysis on television reviews in Portuguese
+ - Recommender systems for television models in Portuguese
+ - Text classification for different television brands and types in Portuguese
+ - Named entity recognition in television-related contexts in Portuguese
+ - Aspect extraction for features and specifications of televisions in Portuguese
+ - Text generation for summarizing television reviews in Portuguese
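For ideas like those above, the model would typically be loaded through the `transformers` pipeline API. A minimal fill-mask sketch, assuming the Hub repo id `jcfneto/bert-tv-portuguese` (inferred from the commit author and the card's previous title, not confirmed here) and TensorFlow weights, as the card's framework versions suggest:

```python
from transformers import pipeline

# Repo id is an assumption (commit author + the card's previous title);
# adjust to the actual Hub id if it differs.
model_id = "jcfneto/bert-tv-portuguese"

# framework="tf" because the card lists TensorFlow among its framework versions.
fill_mask = pipeline("fill-mask", model=model_id, framework="tf")

# "[MASK]" is BERT's mask token; the sentence means "The picture on this TV is [MASK]."
for pred in fill_mask("A imagem dessa TV é [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```

Downstream tasks such as sentiment analysis or text classification would instead load the checkpoint into a sequence-classification head and fine-tune on labeled reviews.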
 
+ ## Limitations and bias
 
+ As BERT-TV is pre-trained exclusively on television reviews in Brazilian Portuguese, its performance may be
+ limited when applied to other types of text or to reviews in different languages. Furthermore, the model could
+ inherit biases present in the training data, which may influence its predictions or embeddings. The tokenizer
+ is adapted from the BERTimbau tokenizer, optimized for Brazilian Portuguese, so it might not deliver optimal
+ results with other languages or Portuguese dialects.
 
+ ## Framework versions
 
  - Transformers 4.27.3
  - TensorFlow 2.11.1