---
language: en
license: apache-2.0
library_name: setfit
tags:
- setfit
- absa
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- tomaarsen/setfit-absa-semeval-restaurants
metrics:
- accuracy
widget:
- text: bottles of wine:bottles of wine are cheap and good.
- text: world:I also ordered the Change Mojito, which was out of this world.
- text: bar:We were still sitting at the bar while we drank the sangria, but facing
    away from the bar when we turned back around, the $2 was gone the people next
    to us said the bartender took it.
- text: word:word of advice, save room for pasta dishes and never leave until you've
    had the tiramisu.
- text: bartender:We were still sitting at the bar while we drank the sangria, but
    facing away from the bar when we turned back around, the $2 was gone the people
    next to us said the bartender took it.
pipeline_tag: text-classification
inference: false
co2_eq_emissions:
  emissions: 18.322516829847984
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 0.303
  hardware_used: 1 x NVIDIA GeForce RTX 3090
base_model: BAAI/bge-small-en-v1.5
model-index:
- name: SetFit Aspect Model with BAAI/bge-small-en-v1.5 on SemEval 2014 Task 4 (Restaurants)
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: SemEval 2014 Task 4 (Restaurants)
      type: tomaarsen/setfit-absa-semeval-restaurants
      split: test
    metrics:
    - type: accuracy
      value: 0.8623188405797102
      name: Accuracy
---

# SetFit Aspect Model with BAAI/bge-small-en-v1.5 on SemEval 2014 Task 4 (Restaurants)

This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [SemEval 2014 Task 4 (Restaurants)](https://huggingface.co/datasets/tomaarsen/setfit-absa-semeval-restaurants) dataset that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of filtering aspect span candidates.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

This model was trained within the context of a larger system for ABSA, which works as follows:

1. Use a spaCy model to select possible aspect span candidates.
2. **Use this SetFit model to filter these possible aspect span candidates** (see the sketch after this list).
3. Use a SetFit model to classify the filtered aspect span candidates.
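
The filtering step can also be exercised on its own. The sketch below is illustrative rather than canonical: it assumes that `AspectModel` (from the SetFit library's span support) can load this repository directly and that it classifies candidates encoded as `span:sentence` strings, the same format shown in the Model Labels table below.

```python
from setfit import AspectModel

# Load only the aspect filtering model from the 🤗 Hub.
model = AspectModel.from_pretrained(
    "tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-aspect"
)
# Candidate spans are encoded as "span:sentence", matching the training format.
candidates = [
    "staff:But the staff was so horrible to us.",
    "Teodora:To be completely fair, the only redeeming factor was the food, "
    "which was above average, but couldn't make up for all the other deficiencies of Teodora.",
]
# Expected labels: "aspect" for the first candidate, "no aspect" for the second
# (illustrative, not a recorded run).
print(model.predict(candidates))
```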

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** en_core_web_lg
- **SetFitABSA Aspect Model:** [tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-aspect](https://huggingface.co/tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-aspect)
- **SetFitABSA Polarity Model:** [tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-polarity](https://huggingface.co/tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-polarity)
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
- **Training Dataset:** [SemEval 2014 Task 4 (Restaurants)](https://huggingface.co/datasets/tomaarsen/setfit-absa-semeval-restaurants)
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label     | Examples                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
|:----------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| aspect    | <ul><li>'staff:But the staff was so horrible to us.'</li><li>"food:To be completely fair, the only redeeming factor was the food, which was above average, but couldn't make up for all the other deficiencies of Teodora."</li><li>"food:The food is uniformly exceptional, with a very capable kitchen which will proudly whip up whatever you feel like eating, whether it's on the menu or not."</li></ul>                                                                                                                              |
| no aspect | <ul><li>"factor:To be completely fair, the only redeeming factor was the food, which was above average, but couldn't make up for all the other deficiencies of Teodora."</li><li>"deficiencies:To be completely fair, the only redeeming factor was the food, which was above average, but couldn't make up for all the other deficiencies of Teodora."</li><li>"Teodora:To be completely fair, the only redeeming factor was the food, which was above average, but couldn't make up for all the other deficiencies of Teodora."</li></ul> |

## Evaluation

### Metrics
| Label   | Accuracy |
|:--------|:---------|
| **all** | 0.8623   |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import AbsaModel

# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
    "tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-aspect",
    "tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
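
The predictions are one list of `{"span": ..., "polarity": ...}` dictionaries per input sentence; for the example above, something along the lines of `[[{'span': 'food', 'polarity': 'positive'}, {'span': 'venue', 'polarity': 'negative'}]]` would be expected (illustrative output, not a recorded run).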

<!--
### Downstream Use

*List how someone could finetune this model on their own dataset.*
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 4   | 19.3576 | 45  |

| Label     | Training Sample Count |
|:----------|:----------------------|
| no aspect | 170                   |
| aspect    | 255                   |

### Training Hyperparameters
- batch_size: (256, 256)
- num_epochs: (5, 5)
- max_steps: 5000
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
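
These values map directly onto `setfit.TrainingArguments`. A rough reconstruction of the training run is sketched below; it is not the exact script, and in particular the validation split (used for the validation losses under Training Results) is an assumption, carved out of the training data here.

```python
from datasets import load_dataset
from setfit import AbsaModel, AbsaTrainer, TrainingArguments

# Start from the base embedding model; spaCy provides the aspect span candidates.
model = AbsaModel.from_pretrained(
    "BAAI/bge-small-en-v1.5",
    spacy_model="en_core_web_lg",
)

# The dataset ships train/test splits; the 90/10 validation carve-out is an assumption.
dataset = load_dataset("tomaarsen/setfit-absa-semeval-restaurants", split="train")
dataset = dataset.train_test_split(test_size=0.1, seed=42)

# Mirror the hyperparameters listed above; tuples are (embedding phase, classifier phase).
args = TrainingArguments(
    batch_size=(256, 256),
    num_epochs=(5, 5),
    max_steps=5000,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-5, 1e-5),
    head_learning_rate=0.01,
    margin=0.25,
    end_to_end=False,
    use_amp=True,
    warmup_proportion=0.1,
    seed=42,
    load_best_model_at_end=True,
)

trainer = AbsaTrainer(
    model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```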

### Training Results
| Epoch      | Step    | Training Loss | Validation Loss |
|:----------:|:-------:|:-------------:|:---------------:|
| 0.0027     | 1       | 0.2498        | -               |
| 0.1355     | 50      | 0.2442        | -               |
| 0.2710     | 100     | 0.2462        | 0.2496          |
| 0.4065     | 150     | 0.2282        | -               |
| 0.5420     | 200     | 0.0752        | 0.1686          |
| 0.6775     | 250     | 0.0124        | -               |
| 0.8130     | 300     | 0.0128        | 0.1884          |
| 0.9485     | 350     | 0.0062        | -               |
| 1.0840     | 400     | 0.0012        | 0.1830          |
| 1.2195     | 450     | 0.0009        | -               |
| 1.3550     | 500     | 0.0008        | 0.2072          |
| 1.4905     | 550     | 0.0031        | -               |
| 1.6260     | 600     | 0.0006        | 0.1716          |
| 1.7615     | 650     | 0.0005        | -               |
| **1.8970** | **700** | **0.0005**    | **0.1666**      |
| 2.0325     | 750     | 0.0005        | -               |
| 2.1680     | 800     | 0.0004        | 0.2086          |
| 2.3035     | 850     | 0.0005        | -               |
| 2.4390     | 900     | 0.0004        | 0.1830          |
| 2.5745     | 950     | 0.0004        | -               |
| 2.7100     | 1000    | 0.0036        | 0.1725          |
| 2.8455     | 1050    | 0.0004        | -               |
| 2.9810     | 1100    | 0.0003        | 0.1816          |
| 3.1165     | 1150    | 0.0004        | -               |
| 3.2520     | 1200    | 0.0003        | 0.1802          |

- The bold row denotes the saved checkpoint.

### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.018 kg of CO2
- **Hours Used**: 0.303 hours
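
For reference, figures like these are typically collected by wrapping the training run in CodeCarbon's `EmissionsTracker`, roughly as in this generic sketch (not the exact setup used here):

```python
from codecarbon import EmissionsTracker


def run_training() -> None:
    """Placeholder for the actual training loop (see the sketch above)."""


tracker = EmissionsTracker()
tracker.start()
try:
    run_training()
finally:
    emissions = tracker.stop()  # estimated emissions in kg of CO2
print(f"{emissions:.3f} kg of CO2 emitted")
```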

### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB

### Framework Versions
- Python: 3.9.16
- SetFit: 1.0.0.dev0
- Sentence Transformers: 2.2.2
- spaCy: 3.7.2
- Transformers: 4.29.0
- PyTorch: 1.13.1+cu117
- Datasets: 2.15.0
- Tokenizers: 0.13.3

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->