---
base_model: NickyNicky/Phi-3-mini-128k-instruct_function
inference: false
model_creator: NickyNicky
model_name: Phi-3-mini-128k-instruct_function
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---

# NickyNicky/Phi-3-mini-128k-instruct_function-GGUF
Quantized GGUF model files for Phi-3-mini-128k-instruct_function from NickyNicky
| Name | Quant method | Size |
| --- | --- | --- |
| phi-3-mini-128k-instruct_function.fp16.gguf | fp16 | 7.64 GB |
| phi-3-mini-128k-instruct_function.q2_k.gguf | q2_k | 1.42 GB |
| phi-3-mini-128k-instruct_function.q3_k_m.gguf | q3_k_m | 1.96 GB |
| phi-3-mini-128k-instruct_function.q4_k_m.gguf | q4_k_m | 2.39 GB |
| phi-3-mini-128k-instruct_function.q5_k_m.gguf | q5_k_m | 2.82 GB |
| phi-3-mini-128k-instruct_function.q6_k.gguf | q6_k | 3.14 GB |
| phi-3-mini-128k-instruct_function.q8_0.gguf | q8_0 | 4.06 GB |
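
A minimal usage sketch, assuming the `llama-cpp-python` bindings are installed (`pip install llama-cpp-python`) and the `q4_k_m` file from the table above has been downloaded locally; the file path, context size, and prompt are illustrative, not prescribed by this repository.

```python
# Sketch: load one of the quantized GGUF files with llama-cpp-python.
# Adjust model_path, n_ctx, and n_gpu_layers to your machine.
from llama_cpp import Llama

llm = Llama(
    model_path="phi-3-mini-128k-instruct_function.q4_k_m.gguf",  # any file from the table
    n_ctx=4096,        # raise toward the 128k limit only if you have the memory
    n_gpu_layers=-1,   # offload all layers to GPU if one is available; 0 for CPU-only
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What can you do?"}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```

Smaller quants (q2_k, q3_k_m) trade answer quality for lower memory use, while q6_k and q8_0 stay closer to the fp16 file at a larger footprint; q4_k_m is a common middle ground.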