---
license: apache-2.0
language:
- en
- de
- es
- fr
- it
- pt
- pl
- nl
- tr
- sv
- cs
- el
- hu
- ro
- fi
- uk
- sl
- sk
- da
- lt
- lv
- et
- bg
- 'no'
- ca
- hr
- ga
- mt
- gl
- zh
- ru
- ko
- ja
- ar
- hi
base_model: utter-project/EuroLLM-9B-Instruct
base_model_relation: quantized
library_name: mlc-llm
pipeline_tag: text-generation
---
4-bit [GPTQ](https://arxiv.org/abs/2210.17323) quantized version of [utter-project/EuroLLM-9B-Instruct](https://huggingface.co/utter-project/EuroLLM-9B-Instruct) for inference with the [Private LLM](https://privatellm.app/) app.
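For intuition about what 4-bit weight storage entails, here is a minimal sketch of group-wise symmetric 4-bit quantization. The group size and the symmetric round-to-nearest scheme are illustrative assumptions only; GPTQ itself additionally applies second-order error correction when choosing the quantized values, and this model's actual group size and packing are determined by the MLC-LLM build.

```python
import numpy as np

def quantize_4bit(weights: np.ndarray, group_size: int = 8):
    """Quantize a 1-D float array to signed 4-bit integers with one scale per group."""
    w = weights.reshape(-1, group_size)
    # One scale per group; symmetric int4 uses the range [-7, 7].
    scales = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scales), -7, 7).astype(np.int8)
    return q, scales

def dequantize_4bit(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover approximate float weights from the 4-bit codes and group scales."""
    return (q * scales).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(16).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s)
# Per-element error is bounded by half a quantization step (scale / 2).
err = float(np.abs(w - w_hat).max())
```

Each weight is then representable in 4 bits plus a shared per-group scale, which is the source of the roughly 4x size reduction relative to 16-bit weights.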