Sentiment Fine-Tuned Phi-3.5

This model is a fine-tuned version of unsloth/phi-3.5-mini-instruct-bnb-4bit, trained on a custom sentiment analysis dataset. Training was accelerated using Unsloth and Hugging Face's TRL library.

Model Details:

This model is a fine-tuned version of unsloth/phi-3.5-mini-instruct-bnb-4bit, optimized for sentiment analysis tasks. The base Phi-3.5 model is a capable instruction-tuned language model, and this fine-tuned version sharpens it for sentiment detection, classification, and related inference.

Intended Use:

This model is intended for use in tasks such as the following (a minimal inference sketch appears after the list):

  • Sentiment Analysis: Classifying the sentiment of text as positive, negative, or neutral.
  • Customer Feedback Analysis: Analyzing reviews and feedback for sentiment.
  • Social Media Monitoring: Detecting the sentiment of posts and comments.
  • Text Classification: General text classification involving sentiment labels.
  • Opinion Mining: Understanding the sentiment within text data.
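
A minimal inference sketch using the transformers library. The repo id below is a placeholder for this model's actual Hugging Face id, and the prompt format is illustrative; adjust it to match the format used during fine-tuning:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; replace with this model's actual Hugging Face id.
model_id = "your-username/sentiment-finetuned-phi-3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Illustrative prompt; match the wording used in the fine-tuning data.
messages = [{
    "role": "user",
    "content": "Classify the sentiment of the following text as "
               "positive, negative, or neutral.\n"
               "Text: I absolutely loved this product!\nSentiment:",
}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding; only a short label is expected.
outputs = model.generate(inputs, max_new_tokens=8, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```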

Training Details:

  • Fine-tuning Dataset: a custom sentiment analysis dataset.
  • Training Method: the model was fine-tuned using Unsloth and Hugging Face's TRL library, which provide optimized, memory-efficient training for language models (a sketch of this setup follows the list).
  • Hardware: [not specified].
  • Acceleration: training with Unsloth was roughly 2x faster.
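
A sketch of a training setup along these lines, assuming LoRA fine-tuning with Unsloth's FastLanguageModel and TRL's SFTTrainer. The dataset id and hyperparameters are hypothetical, and some argument names (e.g. dataset_text_field) vary across TRL versions:

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the 4-bit base model this card describes.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/phi-3.5-mini-instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are illustrative.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Hypothetical dataset id with a "text" column of formatted examples.
dataset = load_dataset("your-username/sentiment-dataset", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
    ),
)
trainer.train()
```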

Model Evaluation:

  • No formal evaluation results are published here yet. Relevant metrics for this task include (a computation sketch follows the list):
    • Accuracy
    • Precision, Recall, and F1
    • Qualitative analysis of example outputs
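
A small sketch for computing these metrics with scikit-learn, assuming gold labels and model predictions are available as lists of label strings (the values below are made up for illustration):

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = ["positive", "negative", "neutral", "positive"]  # example gold labels
y_pred = ["positive", "negative", "positive", "positive"]  # example predictions

accuracy = accuracy_score(y_true, y_pred)
# Macro-average treats the three sentiment classes equally.
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f}")
```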

Limitations:

  • The model's performance may vary on datasets significantly different from the training data.
  • It may struggle with sarcasm or nuanced expressions of sentiment.
  • The model is optimized for sentiment analysis tasks; it is not suitable as a general-purpose language model.

Further Information:

  • A repository with the training code and related resources is not currently linked.

Acknowledgements:

  • Unsloth for the optimized training library.
  • Hugging Face for the TRL library and model hosting.
  • The creators of the datasets used to build the fine-tuning data.

Model Format:

  • GGUF, 8-bit quantization
  • 3.82B parameters
  • Architecture tag: llama
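
Since the artifact is distributed as an 8-bit GGUF, it can be run locally with llama.cpp bindings. A minimal sketch using llama-cpp-python; the file name is a placeholder and the prompt format is illustrative:

```python
from llama_cpp import Llama

# Placeholder file name; use the actual GGUF file from this repo.
llm = Llama(model_path="sentiment-finetuned-phi-3.Q8_0.gguf", n_ctx=2048)

out = llm(
    "Classify the sentiment of the following text as positive, negative, "
    "or neutral.\nText: The service was painfully slow.\nSentiment:",
    max_tokens=8,
    temperature=0.0,  # deterministic output for a classification label
)
print(out["choices"][0]["text"].strip())
```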