---
language:
- en
metrics:
- accuracy
library_name: transformers
base_model: OEvortex/HelpingAI-Lite
tags:
- HelpingAI
- coder
- lite
- Fine-tuned
- moe
- nlp
license: other
license_name: hsul
license_link: https://huggingface.co/OEvortex/vortex-3b/raw/main/LICENSE.md
---
# HelpingAI-Lite-2x1B
# Subscribe to my YouTube channel
[Subscribe](https://youtube.com/@OEvortex)
HelpingAI-Lite-2x1B is a MoE (Mixture of Experts) model that surpasses HelpingAI-Lite in accuracy, at the cost of slightly slower inference. This trade-off makes it a good choice when higher accuracy matters more than a small increase in processing time.
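## Usage
A minimal loading sketch with `transformers`. The repository id `OEvortex/HelpingAI-Lite-2x1B` is assumed from the model name above, and a standard causal-LM interface is assumed; adjust both to match the actual checkpoint.

```python
# Minimal sketch: the repo id below is assumed from the model name, not verified.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OEvortex/HelpingAI-Lite-2x1B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```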
## Language
The model supports the English language.