---
license: apache-2.0
datasets:
  - anon8231489123/ShareGPT_Vicuna_unfiltered
language:
  - en
pipeline_tag: text-generation
---

# Model description

This is a Vicuna-like model with only 160M parameters, fine-tuned from LLaMA-160m on ShareGPT data.

The training setup follows that of the Vicuna suite.

The model is mainly intended as a small draft model for speculative decoding. Compared with LLaMA-160m, it aligns more closely with the Vicuna target models while losing little alignment with the LLaMA models.
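As a sketch of the intended use, the model can serve as the `assistant_model` in Hugging Face transformers' assisted generation, which implements speculative decoding with a small draft model. The target model ID below (`lmsys/vicuna-7b-v1.3`) is an assumption for illustration; any compatible Vicuna checkpoint could be substituted.

```python
# Sketch: speculative decoding with vicuna-160m as the draft model,
# using transformers' assisted generation (assistant_model argument).
# "lmsys/vicuna-7b-v1.3" is an assumed target; swap in your own checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "lmsys/vicuna-7b-v1.3"  # assumed larger target model
draft_id = "double7/vicuna-160m"    # this model, used as the draft

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(
    target_id, torch_dtype=torch.float16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    draft_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("What is speculative decoding?", return_tensors="pt").to(target.device)
# assistant_model turns on assisted (speculative) generation:
# the draft proposes tokens, the target verifies them in one forward pass.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```

Because draft and target share a tokenizer family, accepted draft tokens need no re-encoding, which is where the speedup comes from.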