Model Card for BiniyamAjaw/llama-2-7b-finetuned-adapters

A Llama-2-7B model fine-tuned with LoRA on an Amharic corpus collected from public Telegram channels and groups.

Model Details

Model Description

  • Developed by: Biniyam Ajaw, Elias Assamnew
  • Funded by: 10 Academy
  • Shared by: Biniyam Ajaw
  • Model type: Text generation
  • Language(s) (NLP): Amharic, English
  • License: MIT
  • Finetuned from model: NousResearch/Llama-2-7b-hf

Uses

The model is still in development and was trained on a limited amount of data, so it may not generate the content you expect.

Downstream Use

You can fine-tune this model on labeled, domain-specific data to get better results.
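As a minimal sketch of how the adapter can be loaded for generation or further fine-tuning (the repository ids come from this card, but the generation settings and the prompt are illustrative assumptions):

```python
# Sketch: load the base model and attach this LoRA adapter with PEFT.
# Repo ids are taken from this card; dtype, device placement, and
# generation parameters are illustrative choices, not fixed settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Llama-2-7b-hf"
adapter_id = "BiniyamAjaw/llama-2-7b-finetuned-adapters"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach LoRA weights
model.eval()

prompt = "ሰላም"  # example Amharic prompt ("hello")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

For further fine-tuning on your own labeled data, the same `PeftModel` can be trained directly, or the adapter can be merged into the base weights with `model.merge_and_unload()` first.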

Bias, Risks, and Limitations

The model is heavily biased toward generating news-style content. It may also repeat specific words, because it was trained on data that was cleaned but not filtered, owing to the limited number of tokens available.

Recommendations

The model performs better if you fine-tune it on labeled data for the kind of content you want it to generate.

Framework versions

  • PEFT 0.7.2.dev0

Model tree for BiniyamAjaw/llama-2-7b-finetuned-adapters

This repository is a LoRA adapter; the base model is NousResearch/Llama-2-7b-hf.