---
base_model: Agnuxo/Qwen2_0.5B
language: ['en', 'es']
license: apache-2.0
tags: ['text-generation-inference', 'transformers', 'unsloth', 'mistral', 'gguf']
datasets: ['iamtarun/python_code_instructions_18k_alpaca', 'jtatman/python-code-dataset-500k', 'flytech/python-codes-25k', 'Vezora/Tested-143k-Python-Alpaca', 'codefuse-ai/CodeExercise-Python-27k', 'Vezora/Tested-22k-Python-Alpaca', 'mlabonne/Evol-Instruct-Python-26k']
library_name: adapter-transformers
metrics:
- accuracy
- bertscore
---
|
|
|
# Uploaded model |
|
|
|
[<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" width="100"/><img src="https://github.githubassets.com/assets/GitHub-Logo-ee398b662d42.png" width="100"/>](https://github.com/Agnuxo1) |
|
- **Developed by:** [Agnuxo](https://github.com/Agnuxo1)
|
- **License:** apache-2.0 |
|
- **Finetuned from model:** Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal
|
|
|
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
|
|
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
|
|
|
|
|
## Benchmark Results |
|
|
|
This model has been fine-tuned on Python code-instruction datasets and evaluated on the following benchmarks:
|
|
|
### Accuracy

**Accuracy:** Not Available



![Accuracy](./accuracy_accuracy.png)
|
|
|
### BERTScore

**BERTScore:** Not Available



![BERTScore](./bertscore_bertscore.png)
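For reference, metrics like these can be reproduced with the Hugging Face `evaluate` library. The snippet below is a minimal sketch; the prediction/reference strings are placeholders, not the actual evaluation data.

```python
# Sketch of computing the two reported metrics; inputs are placeholders.
import evaluate

predictions = ["def add(a, b):\n    return a + b"]
references  = ["def add(a, b):\n    return a + b"]

# Simple exact-match accuracy over generated strings.
accuracy = sum(p == r for p, r in zip(predictions, references)) / len(predictions)

# BERTScore via the `evaluate` library.
bertscore = evaluate.load("bertscore")
scores = bertscore.compute(predictions=predictions, references=references, lang="en")

print(accuracy, scores["f1"])
```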
|
|
|
|
|
- **Model size:** 494,032,768 parameters

- **Required memory:** 1.84 GB
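A minimal inference sketch with `transformers` is shown below. The repository id is a hypothetical placeholder (replace it with this model's actual Hub path), and the prompt and generation settings are only examples.

```python
# Minimal text-generation sketch; repo id is a placeholder, not a confirmed path.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Agnuxo/<this-model-repo>"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```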
|
|
|
For more details, visit my [GitHub](https://github.com/Agnuxo1). |
|
|
|
Thanks for your interest in this model! |
|
|