---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: The ban, which went into effect in March 2019, was embraced by Trump following
a massacre that killed 58 people at a music festival in Las Vegas in which the
gunman used bump stocks.
- text: 'Now Modi has made international headlines for yet another similarity: He’s
constructing a massive wall … but unlike Trump’s goal of keeping immigrants out,
Modi’s wall was built to hide the country’s poverty from the gold-plated American
president.'
- text: 'Though banks have fled many low-income communities, there’s a post office
for almost every ZIP code in the country. '
- text: The administration has stonewalled Congress during the impeachment proceedings
and other investigations, but the American public overwhelmingly wants the Trump
administration to comply with lawmakers.
- text: The gun lobby has repeatedly claimed that using a gun in self-defense is a
common event, often going so far as to allege that Americans defend themselves
with guns millions of times a year.
pipeline_tag: text-classification
inference: true
base_model: BAAI/bge-small-en-v1.5
model-index:
- name: SetFit with BAAI/bge-small-en-v1.5
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.67003367003367
name: Accuracy
---
# SetFit with BAAI/bge-small-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
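Step 2 can be sketched with plain scikit-learn. The embeddings below are synthetic stand-ins (only the 384-dimensional size matches the real bge-small-en-v1.5 body); in the actual model, the vectors come from the fine-tuned Sentence Transformer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for sentence embeddings: three well-separated
# clusters of 384-dim vectors (bge-small-en-v1.5's embedding size).
rng = np.random.default_rng(seed=42)
centers = rng.normal(size=(3, 384))
X = np.vstack([c + 0.1 * rng.normal(size=(20, 384)) for c in centers])
y = np.repeat(["center", "left", "right"], 20)

# Step 2: fit a LogisticRegression head on the embedded training texts.
head = LogisticRegression(max_iter=1000)
head.fit(X, y)
print(head.predict(X[:1]))
```

At inference time, SetFit embeds the incoming text with the fine-tuned body and passes the vector to this head, exactly as `head.predict` does here.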
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label  | Examples |
|:-------|:---------|
| center | <ul><li>'A leading economist who vouched for Democratic presidential candidate Elizabeth Warren’s healthcare reform plan told Reuters on Thursday he doubts its staggering cost can be fully covered alongside her other government programs.'</li><li>'Labour leader Jeremy Corbyn unveiled his party’s election manifesto on Thursday, setting out radical plans to transform Britain with public sector pay rises, higher taxes on companies and a sweeping nationalisation of infrastructure.'</li><li>'Instagram will start blocking any hashtags spreading misinformation about vaccines, becoming the latest internet platform to crack down on bad health information.'</li></ul> |
| right  | <ul><li>'Sanders praises the radical Green New Deal, champions a Medicare for All plan with a $34 trillion price tag, nods to abortion as a means of population control, and defends bread lines and Fidel Castro’s Cuba.'</li><li>'Since when did even conservative publications consider that it’s the right and moral thing to do to provide covering fire for an increasingly thuggish, openly hard-left, and borderline terroristic group which is less obviously to do with ‘racism’, but which has almost everything to do with smashing Western civilisation?'</li><li>'Local health officer Dr Rosana Salvaterra appeared to co-sign the demonstration, praising activists for wearing masks and claiming they obeyed social distancing protocols — although footage of the event strongly suggests that is not strictly accurate.'</li></ul> |
| left   | <ul><li>'Activists planning to line California roadways with anti-vaccination billboards full of misinformation are paying for them through Facebook fundraisers, despite a platform-wide crackdown on such campaigns.'</li><li>'On Monday, as Common Dreams reported, Trump threatened to deploy federal forces to Chicago, Philadelphia, Detroit, Baltimore, and Oakland to confront Black Lives Matter protesters.'</li><li>"When the nation's highest civilian honor went to a right-wing media personality, it served as an oddly appropriate capstone to Trump's broader goals."</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.6700 |
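Accuracy here is the fraction of exact label matches on the held-out test split. A minimal illustration of the computation, with hypothetical gold labels and predictions (the 0.6700 above was computed on the actual test set):

```python
from sklearn.metrics import accuracy_score

# Hypothetical gold labels and predictions, for illustration only.
y_true = ["left", "right", "center", "left", "right", "center"]
y_pred = ["left", "right", "left", "left", "center", "center"]
print(accuracy_score(y_true, y_pred))  # 4 of 6 match -> 0.666...
```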
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("JordanTallon/Unifeed")
# Run inference
preds = model("Though banks have fled many low-income communities, there’s a post office for almost every ZIP code in the country. ")
```
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count   | 1   | 33.0139 | 195 |

| Label  | Training Sample Count |
|:-------|:----------------------|
| center | 782 |
| left | 780 |
| right | 813 |
### Training Hyperparameters
- batch_size: (64, 64)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0007 | 1 | 0.2531 | - |
| 0.0337 | 50 | 0.253 | - |
| 0.0673 | 100 | 0.2491 | - |
| 0.1010 | 150 | 0.2592 | - |
| 0.1347 | 200 | 0.2476 | - |
| 0.1684 | 250 | 0.2282 | - |
| 0.2020 | 300 | 0.2222 | - |
| 0.2357 | 350 | 0.2196 | - |
| 0.2694 | 400 | 0.2199 | - |
| 0.3030 | 450 | 0.1821 | - |
| 0.3367 | 500 | 0.1819 | - |
| 0.3704 | 550 | 0.1327 | - |
| 0.4040 | 600 | 0.1193 | - |
| 0.4377 | 650 | 0.1652 | - |
| 0.4714 | 700 | 0.1059 | - |
| 0.5051 | 750 | 0.1141 | - |
| 0.5387 | 800 | 0.1103 | - |
| 0.5724 | 850 | 0.1138 | - |
| 0.6061 | 900 | 0.0894 | - |
| 0.6397 | 950 | 0.1138 | - |
| 0.6734 | 1000 | 0.11 | - |
| 0.7071 | 1050 | 0.1091 | - |
| 0.7407 | 1100 | 0.0804 | - |
| 0.7744 | 1150 | 0.1161 | - |
| 0.8081 | 1200 | 0.0715 | - |
| 0.8418 | 1250 | 0.1 | - |
| 0.8754 | 1300 | 0.0687 | - |
| 0.9091 | 1350 | 0.0488 | - |
| 0.9428 | 1400 | 0.0354 | - |
| 0.9764 | 1450 | 0.0244 | - |
| 1.0101 | 1500 | 0.02 | - |
| 1.0438 | 1550 | 0.0179 | - |
| 1.0774 | 1600 | 0.0219 | - |
| 1.1111 | 1650 | 0.0056 | - |
| 1.1448 | 1700 | 0.0169 | - |
| 1.1785 | 1750 | 0.0038 | - |
| 1.2121 | 1800 | 0.0139 | - |
| 1.2458 | 1850 | 0.0154 | - |
| 1.2795 | 1900 | 0.0118 | - |
| 1.3131 | 1950 | 0.0019 | - |
| 1.3468 | 2000 | 0.0016 | - |
| 1.3805 | 2050 | 0.0019 | - |
| 1.4141 | 2100 | 0.0016 | - |
| 1.4478 | 2150 | 0.0017 | - |
| 1.4815 | 2200 | 0.0011 | - |
| 1.5152 | 2250 | 0.0013 | - |
| 1.5488 | 2300 | 0.0123 | - |
| 1.5825 | 2350 | 0.0014 | - |
| 1.6162 | 2400 | 0.0013 | - |
| 1.6498 | 2450 | 0.001 | - |
| 1.6835 | 2500 | 0.0042 | - |
| 1.7172 | 2550 | 0.0017 | - |
| 1.7508 | 2600 | 0.0027 | - |
| 1.7845 | 2650 | 0.0016 | - |
| 1.8182 | 2700 | 0.0011 | - |
| 1.8519 | 2750 | 0.0014 | - |
| 1.8855 | 2800 | 0.0012 | - |
| 1.9192 | 2850 | 0.0012 | - |
| 1.9529 | 2900 | 0.0009 | - |
| 1.9865 | 2950 | 0.001 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```