---
license: mit
widget:
  - text: >-
      The early effects of our policy tightening are also becoming visible,
      especially in sectors like manufacturing and construction that are more
      sensitive to interest rate changes.
---

# CentralBankRoBERTa

CentralBankRoBERTa is a large language model. It combines an economic agent classifier that distinguishes five basic macroeconomic agents with a binary sentiment classifier that identifies the emotional content of sentences in central bank communications.
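
As a minimal sketch of how the two classifiers can be used together, both can be loaded as Transformers pipelines. The audience model identifier below is the one from this repository; the sentiment model identifier is an assumption based on the repository's naming pattern, so verify it on the Hugging Face Hub before use.

```python
from transformers import pipeline

# Economic agent (audience) classifier from this repository
audience_classifier = pipeline(
    "text-classification",
    model="Moritz-Pfeifer/CentralBankRoBERTa-audience-classifier",
)

# Companion binary sentiment classifier -- this model id is an assumption
# based on the naming pattern; check the Hugging Face Hub for the exact name.
sentiment_classifier = pipeline(
    "text-classification",
    model="Moritz-Pfeifer/CentralBankRoBERTa-sentiment-classifier",
)

sentence = (
    "The early effects of our policy tightening are also becoming visible, "
    "especially in sectors like manufacturing and construction that are more "
    "sensitive to interest rate changes."
)

print("Audience:", audience_classifier(sentence)[0]["label"])
print("Sentiment:", sentiment_classifier(sentence)[0]["label"])
```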

## Overview

The AudienceClassifier model is designed to classify the target audience of a given text. It can determine whether the text is addressing households, firms, the financial sector, the government, or the central bank itself. This model is based on the RoBERTa architecture and has been fine-tuned on a diverse and extensive dataset to provide accurate predictions.

## Intended Use

The AudienceClassifier model is intended for analyzing central bank communications in which categorizing content by target audience is essential.

## Performance

- Accuracy: 93%
- F1 Score: 0.93
- Precision: 0.93
- Recall: 0.93
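
The metrics above are the reported results for the AudienceClassifier. As a hedged sketch of how such figures can be reproduced on your own labelled evaluation set (the example texts, gold labels, and macro averaging below are placeholders and assumptions, and the gold label strings must match the model's label names), scikit-learn's standard metrics can be applied to the pipeline's predictions:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import pipeline

# Hypothetical held-out evaluation data: sentences with gold audience labels.
texts = [
    "We used our liquidity tools to make funding available to banks that might need it.",
]
gold_labels = ["financial sector"]  # placeholder -- use your own annotations

classifier = pipeline(
    "text-classification",
    model="Moritz-Pfeifer/CentralBankRoBERTa-audience-classifier",
)
predictions = [result["label"] for result in classifier(texts)]

accuracy = accuracy_score(gold_labels, predictions)
precision, recall, f1, _ = precision_recall_fscore_support(
    gold_labels, predictions, average="macro", zero_division=0
)
print(f"Accuracy: {accuracy:.2f}  Precision: {precision:.2f}  Recall: {recall:.2f}  F1: {f1:.2f}")
```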

## Usage

You can use these models in your own applications by leveraging the Hugging Face Transformers library. Below is a Python code snippet demonstrating how to load and use the AudienceClassifier model:

```python
from transformers import pipeline

# Load the AudienceClassifier model
audience_classifier = pipeline("text-classification", model="Moritz-Pfeifer/CentralBankRoBERTa-audience-classifier")

# Perform audience classification
audience_result = audience_classifier("We used our liquidity tools to make funding available to banks that might need it.")
print("Audience Classification:", audience_result[0]['label'])
```