Felladrin/gguf-vicuna-160m
Tags: GGUF, Inference Endpoints
License: apache-2.0
GGUF version of double7/vicuna-160m.
Downloads last month: 10
Model size: 162M params
Architecture: llama
Quantizations:
2-bit: Q2_K
3-bit: Q3_K_S, Q3_K_M
4-bit: Q4_0, Q4_K_M
5-bit: Q5_K_M
6-bit: Q6_K
8-bit: Q8_0
16-bit: F16
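As a rough guide to choosing among these files, the size of each GGUF variant can be estimated from the parameter count and the bits per weight of the quantization format. The sketch below uses approximate bits-per-weight values derived from the llama.cpp block layouts (Q4_0: 4-bit weights plus one fp16 scale per 32-weight block; Q8_0: 8-bit weights plus one fp16 scale; F16: plain half precision); the K-quants use mixed block sizes, and real files also carry metadata, so actual sizes will differ somewhat.

```python
# Rough GGUF file-size estimate from parameter count and bits per weight.
# Bits-per-weight values are approximations for llama.cpp formats; exact
# on-disk sizes also include tensor metadata and the GGUF header.
PARAMS = 162_000_000  # model size reported on the card: 162M params

BITS_PER_WEIGHT = {
    "Q4_0": 4.5,   # 32 x 4-bit weights + one 16-bit scale per block
    "Q8_0": 8.5,   # 32 x 8-bit weights + one 16-bit scale per block
    "F16": 16.0,   # unquantized half precision
}

def estimated_size_mb(quant: str) -> float:
    """Estimated tensor-data size in megabytes (1 MB = 10**6 bytes)."""
    return PARAMS * BITS_PER_WEIGHT[quant] / 8 / 1e6

for quant in BITS_PER_WEIGHT:
    print(f"{quant}: ~{estimated_size_mb(quant):.0f} MB")
```

For a 162M-parameter model the spread is small in absolute terms (roughly 90 MB at Q4_0 versus about 320 MB at F16), so the lower-bit variants mainly matter for memory-constrained targets.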
Model tree for Felladrin/gguf-vicuna-160m
Base model: double7/vicuna-160m (this model is one of 2 quantized versions)