
Quantization made by Richard Erkhov.

Github

Discord

Request more models

vicuna-160m - bnb 8bits

Original model description:

license: apache-2.0
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
language:
- en
pipeline_tag: text-generation

Model description

This is a Vicuna-like model with only 160M parameters, fine-tuned from LLaMA-160m on ShareGPT data.

The training setup follows the Vicuna suite.

The model is mainly developed as a base small speculative (draft) model for the MCSD paper. Compared with LLaMA-160m, it aligns better with the Vicuna models while losing little alignment with the LLaMA models.
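Why draft/target alignment matters can be sketched with a toy greedy speculative-decoding loop. The "models" below are stand-in functions over a tiny integer vocabulary (all names and rules here are illustrative assumptions, not the actual MCSD setup): the draft proposes a few tokens, the target verifies them left to right, and throughput depends on how often they agree.

```python
def target_model(ctx):
    # Toy deterministic "target LM": next token is (sum of context + 1) mod 4.
    return (sum(ctx) + 1) % 4

def aligned_draft(ctx):
    # A draft that always agrees with the target (high alignment).
    return (sum(ctx) + 1) % 4

def misaligned_draft(ctx):
    # A draft that essentially never agrees with the target (low alignment).
    return (sum(ctx) + 2) % 4

def speculative_decode(draft, target, prompt, k=4, steps=16):
    """Greedy speculative decoding sketch: the draft proposes k tokens
    autoregressively; the target keeps the longest agreeing prefix and
    emits one correction token on the first mismatch."""
    out = list(prompt)
    accepted = proposed = 0
    while len(out) - len(prompt) < steps:
        # Draft proposes k tokens from the current context.
        draft_ctx = list(out)
        proposal = []
        for _ in range(k):
            t = draft(tuple(draft_ctx))
            proposal.append(t)
            draft_ctx.append(t)
        # Target verifies the proposal token by token.
        for t in proposal:
            proposed += 1
            if target(tuple(out)) == t:
                out.append(t)
                accepted += 1
            else:
                out.append(target(tuple(out)))  # correction token
                break
    return out, accepted / proposed

_, rate_aligned = speculative_decode(aligned_draft, target_model, [1, 2])
_, rate_misaligned = speculative_decode(misaligned_draft, target_model, [1, 2])
```

With the aligned draft every proposed token is accepted, so the target verifies k tokens per forward pass; with the misaligned draft almost nothing is accepted and speculation buys no speedup, which is the property the alignment table below summarizes.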

| Draft Model | Target Model | Alignment |
|---|---|---|
| LLaMA-68/160M | LLaMA-13/33B | 😂 |
| LLaMA-68/160M | Vicuna-13/33B | 😦 |
| Vicuna-68/160M | LLaMA-13/33B | 😂 |
| Vicuna-68/160M | Vicuna-13/33B | 😂 |
Safetensors model size: 163M params. Tensor types: F32, FP16, I8.
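Since this card ships an 8-bit bitsandbytes quantization, here is a minimal per-tensor sketch of the absmax int8 scheme that underlies it (the real library quantizes block-wise and handles outliers separately; everything below is a simplified illustration, not the bitsandbytes API):

```python
def quantize_absmax_int8(xs):
    """Absmax int8 quantization sketch: scale values by 127 / max|x|,
    round to the int8 range, and keep the scale for dequantization."""
    absmax = max(abs(x) for x in xs) or 1.0
    scale = 127.0 / absmax
    q = [max(-127, min(127, round(x * scale))) for x in xs]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes and the scale."""
    return [v / scale for v in q]

weights = [0.12, -0.5, 0.33, 0.0]   # toy weight values
q, scale = quantize_absmax_int8(weights)
approx = dequantize(q, scale)
# Each reconstructed weight lies within half a quantization step of the original.
```

The stored model keeps the int8 codes plus the per-block scales, which is why an F32 checkpoint shrinks to roughly a quarter of its size while the dequantized weights stay close to the originals.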