mini1013 committed on
Commit fe7d991 · verified · 1 Parent(s): e88fb7f

Push model using huggingface_hub.

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
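
This pooling config enables masked mean pooling only (every other mode is false), so a sentence embedding is the average of its real-token vectors. A minimal sketch of that computation, independent of this repo's actual code:

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings, ignoring padding positions."""
    mask = attention_mask.unsqueeze(-1).float()       # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(dim=1)     # zero out pad tokens, then sum
    counts = mask.sum(dim=1).clamp(min=1e-9)          # number of real tokens per example
    return summed / counts                            # (batch, 768) sentence embeddings
```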
README.md ADDED
@@ -0,0 +1,240 @@
+ ---
+ base_model: mini1013/master_domain
+ library_name: setfit
+ metrics:
+ - metric
+ pipeline_tag: text-classification
+ tags:
+ - setfit
+ - sentence-transformers
+ - text-classification
+ - generated_from_setfit_trainer
+ widget:
+ - text: '[갤러리아] [비비안][여]무봉제 햄팬티 3매입세트(BP0701)(타임월드) 3매세트_100 한화갤러리아(주)'
+ - text: 하프클럽/크로커다일 이너웨어 심리스 퓨징 감탄브라 1+1 크림+베이지 1_사이즈 하프클럽
+ - text: (신세계김해점)오르시떼 여성 C221 나시아 긴소매 원피스 L 신세계백화점
+ - text: '[크로커다일 언더웨어][크로커다일] 라이크라 쉘론 몰드부착 V넥 스트랍 감탄브라 1종 택1 09.CDWBR4M09T 스트랍 라이트그린_XL '
+ - text: 수정이네 데일리 베이직 나시탑 MFNC-030037 블랙/FREE 이궁이네
+ inference: true
+ model-index:
+ - name: SetFit with mini1013/master_domain
+   results:
+   - task:
+       type: text-classification
+       name: Text Classification
+     dataset:
+       name: Unknown
+       type: unknown
+       split: test
+     metrics:
+     - type: metric
+       value: 0.6911114499161457
+       name: Metric
+ ---
+
+ # SetFit with mini1013/master_domain
+
+ This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
+
+ The model has been trained using an efficient few-shot learning technique that involves:
+
+ 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
+ 2. Training a classification head with features from the fine-tuned Sentence Transformer (see the sketch below).
+
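+ A minimal sketch of these two steps, assuming step 1 has already produced the fine-tuned body; the sample texts and labels are hypothetical, not from this repo's training data:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sklearn.linear_model import LogisticRegression
+
+ # Step 1 output: the contrastively fine-tuned Sentence Transformer body
+ body = SentenceTransformer("mini1013/master_domain")
+
+ texts = ["여자 기모 밍크 속바지 겨울", "커플 잠옷 세트 수면 기모"]  # hypothetical few-shot samples
+ labels = [4.0, 8.0]                                                  # hypothetical gold labels
+
+ # Step 2: embed the texts and fit the classification head on those features
+ features = body.encode(texts)
+ head = LogisticRegression().fit(features, labels)
+ ```
+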
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** SetFit
+ - **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
+ - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
+ - **Maximum Sequence Length:** 512 tokens
+ - **Number of Classes:** 10 classes
+ <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
+ - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
+ - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
+
+ ### Model Labels
+ | Label | Examples |
+ |:------|:---------|
+ | 6.0 | <ul><li>'DBS7012 BYC 보디히트 발열 여자 반팔 티셔츠 내의 라이트스킨_85 에이치앤비 주식회사'</li><li>'바풀 융털기모 3부 속바지 드로즈 힙 워머 (90) MG 1911 3 wip 재색_90 _ F (주)에스비아이이너웨어'</li><li>'잔잔한 꽃프린트 반팔 3부 내의 LG7451 블루_100 '</li></ul> |
+ | 8.0 | <ul><li>'여성 D236 모드 민소매 원피스 OR24MFMBD236 1.S 롯데백화점'</li><li>'BYC 커플 잠옷 세트 가을 겨울 파자마 바지 체크 남성 여성 빅사이즈 수면 피치 기모 극세사 110 주니어 큰 1_MHS4615_L (95~100) 라브라'</li><li>'여 극세사 10부 파자마 팬츠 핑크 J203402010 215648 핑크_M 에스텍'</li></ul> |
+ | 2.0 | <ul><li>'인피티지 집업 올블랙 하이서포트 스포츠브라 70DD 네모난오렌지2'</li><li>'CALVIN KLEIN UNDERWEAR 여성 모던코튼 리프트 브라렛_QF5490100 화이트_L 에스제이글로벌'</li><li>'백온 로고밴드 삼각팬티NXWOU8941/세컨스킨 블랙_FREE 롯데쇼핑(주)'</li></ul> |
+ | 1.0 | <ul><li>'[ ] 파워시리즈 하이웨스트 미드따이 중간보정 거들 XL(10398R) VERY BLACK_XL (주)씨제이이엔엠'</li><li>'디즈니 남아 여아 의류 가을 겨울 바지 2 피스/세트 Style 7 Style 10_100 크로노스직구'</li><li>'[스팽스](신세계강남점) TYT 2.0 보정 탱크 (10258R) CHAMPAGNE BEIGE_S 주식회사 에스에스지닷컴'</li></ul> |
+ | 4.0 | <ul><li>'여성 홈웨어 이너웨어 속바지 3부 쫄바지 짧은 레깅스 화이트_L 지에이치글로벌'</li><li>'여자 기모 밍크 속바지 겨울 교복 융속바지 블랙_FREE 제이스'</li><li>'[제임스딘] 국내산 여성 여자 텐셀 2부 속바지 JHWDT025 베이지_85 속옷세상'</li></ul> |
+ | 3.0 | <ul><li>'하프클럽/핏미인 핏미인 라이크라 풀커버맥스 노와이어 여성속옷세트 16종 MinSellAmount 하프클럽'</li><li>'[현대백화점][세컨스킨] NXWOU2011 2021년 노와이어 천연 뱀부 베이직 캐미브라 BLACK /55∼77 (주)현대백화점'</li><li>'[최초가 179 900원]비비안 스킨핏 FREE FIT V71 [0005]80 B CJONSTYLE_LIVE'</li></ul> |
+ | 7.0 | <ul><li>'남성용 와이셔츠 잡아주는 가터벨트 2p세트 김상민'</li><li>'[ch4]삼각 브라패드 수영복 방수 수영복 비키니 볼륨업 도담도담몰'</li><li>'셔츠 가터벨트 와이셔츠 고정 빠짐방지 벨트(2P한세트) 셔츠 가터벨트(2P한세트) 홍스몰'</li></ul> |
+ | 0.0 | <ul><li>'[BYC본사]환타쟈 끈런닝16호 BYT3634 BK(검정색)_095 GSSHOP_'</li><li>'비너스자스민 여성 끈 나시 면스판 베이직 여자 런닝 JLG4506 살구(스킨)_90 아이보리shop'</li><li>'럭센스언더웨어 인견 쿨 노와이어 몰드 브라런닝 LU3007 BK_블랙_90A 주식회사 위드투윤'</li></ul> |
+ | 9.0 | <ul><li>'레이프릴 데일리 면스판 보정팬티 10종 90 쇼핑엔티'</li><li>'[트라이엄프](대전신세계)[Sioggi]슬로기 프리미엄 면스판 MIDI 데일리팬티 블랙 (TS76474/04) M/90 주식회사 에스에스지닷컴'</li><li>'[barbara](신세계강남점)1926 데일리 노라인 햄팬티 8종 세트(ABP5021SET) 100 주식회사 에스에스지닷컴'</li></ul> |
+ | 5.0 | <ul><li>'이벤트속옷 섹시 옆트임 슬립 란제리 야한 빅사이즈 원피스 잠옷 크리스마스속옷 메모리포인트'</li><li>'여성 빅사이즈 이벤트 속옷 섹시 슬립 망사 란제리 앤브리사'</li><li>'여성 미니 롱 슬립 인견 모달 이너 끈 원피스 속치마 여름 잠옷 라이크라라'</li></ul> |
+
+ ## Evaluation
+
+ ### Metrics
+ | Label | Metric |
+ |:--------|:-------|
+ | **all** | 0.6911 |
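+
+ The labels are anonymized (0.0 to 9.0) and the score is reported only under the generic name "metric". Treating it as simple accuracy, which is an assumption, a hedged sketch of recomputing it on a held-out split (`test_texts` and `test_labels` are hypothetical placeholders for the unpublished test data):
+
+ ```python
+ import numpy as np
+ from setfit import SetFitModel
+
+ model = SetFitModel.from_pretrained("mini1013/master_cate_ap2")
+ test_texts = ["여성 홈웨어 이너웨어 속바지 3부 쫄바지"]  # hypothetical held-out product titles
+ test_labels = np.array([4.0])                              # hypothetical gold labels
+
+ preds = np.array(model.predict(test_texts))
+ print((preds == test_labels).mean())  # accuracy-style score over the split
+ ```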
+
+ ## Uses
+
+ ### Direct Use for Inference
+
+ First install the SetFit library:
+
+ ```bash
+ pip install setfit
+ ```
+
+ Then you can load this model and run inference.
+
+ ```python
+ from setfit import SetFitModel
+
+ # Download from the 🤗 Hub
+ model = SetFitModel.from_pretrained("mini1013/master_cate_ap2")
+ # Run inference
+ preds = model("(신세계김해점)오르시떼 여성 C221 나시아 긴소매 원피스 L 신세계백화점")
+ ```
+
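+ Beyond hard labels, the LogisticRegression head also exposes class probabilities via the standard SetFit API; its use here is illustrative rather than taken from this repo:
+
+ ```python
+ # One probability row per input text, one column per class
+ probs = model.predict_proba([
+     "여자 기모 밍크 속바지 겨울 교복 융속바지 블랙_FREE 제이스",
+ ])
+ print(probs.shape)  # (1, 10): ten classes, as listed in the model card
+ ```
+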
+ <!--
+ ### Downstream Use
+
+ *List how someone could finetune this model on their own dataset.*
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Set Metrics
+ | Training set | Min | Median | Max |
+ |:-------------|:----|:-------|:----|
+ | Word count | 3 | 9.9869 | 22 |
+
+ | Label | Training Sample Count |
+ |:------|:----------------------|
+ | 0.0 | 50 |
+ | 1.0 | 50 |
+ | 2.0 | 50 |
+ | 3.0 | 50 |
+ | 4.0 | 50 |
+ | 5.0 | 7 |
+ | 6.0 | 50 |
+ | 7.0 | 50 |
+ | 8.0 | 50 |
+ | 9.0 | 50 |
+
+ ### Training Hyperparameters
+ - batch_size: (512, 512)
+ - num_epochs: (20, 20)
+ - max_steps: -1
+ - sampling_strategy: oversampling
+ - num_iterations: 40
+ - body_learning_rate: (2e-05, 2e-05)
+ - head_learning_rate: 2e-05
+ - loss: CosineSimilarityLoss
+ - distance_metric: cosine_distance
+ - margin: 0.25
+ - end_to_end: False
+ - use_amp: False
+ - warmup_proportion: 0.1
+ - seed: 42
+ - eval_max_steps: -1
+ - load_best_model_at_end: False
+
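+ A hedged sketch of how these hyperparameters map onto `setfit.TrainingArguments`; this is a reconstruction from the list above, not the repo's actual training script:
+
+ ```python
+ from setfit import TrainingArguments
+
+ args = TrainingArguments(
+     batch_size=(512, 512),              # (embedding phase, classifier phase)
+     num_epochs=(20, 20),
+     body_learning_rate=(2e-05, 2e-05),
+     head_learning_rate=2e-05,
+     sampling_strategy="oversampling",
+     num_iterations=40,
+     end_to_end=False,
+     use_amp=False,
+     warmup_proportion=0.1,
+     seed=42,
+ )
+ ```
+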
+ ### Training Results
+ | Epoch | Step | Training Loss | Validation Loss |
+ |:-------:|:----:|:-------------:|:---------------:|
+ | 0.0139 | 1 | 0.3999 | - |
+ | 0.6944 | 50 | 0.3239 | - |
+ | 1.3889 | 100 | 0.169 | - |
+ | 2.0833 | 150 | 0.033 | - |
+ | 2.7778 | 200 | 0.0122 | - |
+ | 3.4722 | 250 | 0.0022 | - |
+ | 4.1667 | 300 | 0.0008 | - |
+ | 4.8611 | 350 | 0.0006 | - |
+ | 5.5556 | 400 | 0.0004 | - |
+ | 6.25 | 450 | 0.0003 | - |
+ | 6.9444 | 500 | 0.0003 | - |
+ | 7.6389 | 550 | 0.0003 | - |
+ | 8.3333 | 600 | 0.0002 | - |
+ | 9.0278 | 650 | 0.0002 | - |
+ | 9.7222 | 700 | 0.0002 | - |
+ | 10.4167 | 750 | 0.0002 | - |
+ | 11.1111 | 800 | 0.0002 | - |
+ | 11.8056 | 850 | 0.0001 | - |
+ | 12.5 | 900 | 0.0001 | - |
+ | 13.1944 | 950 | 0.0001 | - |
+ | 13.8889 | 1000 | 0.0001 | - |
+ | 14.5833 | 1050 | 0.0001 | - |
+ | 15.2778 | 1100 | 0.0001 | - |
+ | 15.9722 | 1150 | 0.0001 | - |
+ | 16.6667 | 1200 | 0.0001 | - |
+ | 17.3611 | 1250 | 0.0001 | - |
+ | 18.0556 | 1300 | 0.0001 | - |
+ | 18.75 | 1350 | 0.0001 | - |
+ | 19.4444 | 1400 | 0.0001 | - |
+
+ ### Framework Versions
+ - Python: 3.10.12
+ - SetFit: 1.1.0.dev0
+ - Sentence Transformers: 3.1.1
+ - Transformers: 4.46.1
+ - PyTorch: 2.4.0+cu121
+ - Datasets: 2.20.0
+ - Tokenizers: 0.20.0
+
+ ## Citation
+
+ ### BibTeX
+ ```bibtex
+ @article{https://doi.org/10.48550/arxiv.2209.11055,
+     doi = {10.48550/ARXIV.2209.11055},
+     url = {https://arxiv.org/abs/2209.11055},
+     author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
+     keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
+     title = {Efficient Few-Shot Learning Without Prompts},
+     publisher = {arXiv},
+     year = {2022},
+     copyright = {Creative Commons Attribution 4.0 International}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who created the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "_name_or_path": "mini1013/master_item_ap",
+   "architectures": [
+     "RobertaModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "tokenizer_class": "BertTokenizer",
+   "torch_dtype": "float32",
+   "transformers_version": "4.46.1",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 32000
+ }
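
The body declared above is a 12-layer, 768-dim RoBERTa encoder with a 32k vocabulary and 514 position embeddings (RoBERTa's usual 512 usable tokens plus offset), paired with a BertTokenizer. A hedged way to inspect it with the standard transformers API, shown as an example rather than the author's workflow:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("mini1013/master_cate_ap2")
print(cfg.model_type, cfg.hidden_size, cfg.num_hidden_layers)  # roberta 768 12
```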
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.1.1",
+     "transformers": "4.46.1",
+     "pytorch": "2.4.0+cu121"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
config_setfit.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "normalize_embeddings": false,
+   "labels": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a29098868cdc992b182d1d56b6466018c0860bec0ae350abb8dd42c347177cd3
+ size 442494816
model_head.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a856e262efaa0b7ffd3bd181465c16d44f93f7d3a3991ed219cf8f3cb3feec0
+ size 62407
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
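
modules.json wires the two-stage Sentence Transformer pipeline: module 0 is the Transformer encoder at the repo root, module 1 the mean-pooling layer configured in 1_Pooling/. A hedged sketch of assembling the same pipeline by hand; in practice `SetFitModel.from_pretrained` does this automatically:

```python
from sentence_transformers import SentenceTransformer, models

word = models.Transformer("mini1013/master_cate_ap2", max_seq_length=512)
pool = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="mean")
st = SentenceTransformer(modules=[word, pool])  # mirrors modules.json: Transformer -> Pooling
```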
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,66 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "4": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "[CLS]",
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": false,
+   "eos_token": "[SEP]",
+   "mask_token": "[MASK]",
+   "max_length": 512,
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
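
The RoBERTa-architecture body pairs with a `BertTokenizer` that uses `[CLS]`/`[SEP]`-style special tokens and truncates to 512 tokens. A hedged usage example with the standard transformers API; the sample string is arbitrary:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mini1013/master_cate_ap2")
enc = tok("데일리 베이직 나시탑", truncation=True, max_length=512)
print(enc.input_ids[0])  # 0, the [CLS] id per added_tokens_decoder above
```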
vocab.txt ADDED
The diff for this file is too large to render. See raw diff