Query Expansion GGUF - based on Llama-3.2-3B

GGUF-quantized version of Llama-3.2-3B fine-tuned for the query expansion task. Part of a collection of query expansion models available in different architectures and sizes.

Overview

Task: Search query expansion
Base model: Llama-3.2-3B-Instruct
Training data: Query Expansion Dataset

Quantized Versions

The model is available in multiple quantization formats:

  • F16 (Original size)
  • Q8_0 (~8-bit quantization)
  • Q5_K_M (~5-bit quantization)
  • Q4_K_M (~4-bit quantization)
  • Q3_K_M (~3-bit quantization)
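As a rule of thumb, the on-disk size of each variant is roughly the parameter count times the bits per weight. The sketch below estimates file sizes for a 3.21B-parameter model; the bits-per-weight figures are approximations (k-quants mix precisions, and real files carry extra metadata), not official numbers.

```python
# Rough GGUF file-size estimate: parameters * bits-per-weight / 8.
# Bits-per-weight values below are approximate, not exact format specs.
APPROX_BITS_PER_WEIGHT = {
    "F16": 16.0,
    "Q8_0": 8.5,
    "Q5_K_M": 5.7,
    "Q4_K_M": 4.85,
    "Q3_K_M": 3.9,
}

def estimate_size_gb(n_params: float, quant: str) -> float:
    """Estimate the GGUF file size in GB for a quantization format."""
    bits = APPROX_BITS_PER_WEIGHT[quant]
    return n_params * bits / 8 / 1e9

for quant in APPROX_BITS_PER_WEIGHT:
    print(f"{quant:8s} ~{estimate_size_gb(3.21e9, quant):.1f} GB")
```

Lower-bit variants trade some expansion quality for memory, so Q4_K_M is a common middle ground on consumer hardware.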

Details

This model is designed for enhancing search and retrieval systems by generating semantically relevant query expansions.

It can be useful for:

  • Advanced RAG systems
  • Search enhancement
  • Query preprocessing
  • Low-latency query expansion
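A query-expansion step for one of these pipelines amounts to prompting the model and parsing the newline-separated completions. The sketch below shows the two helpers; the prompt template and stop tokens are assumptions, not the exact format the model was trained with, and the llama-cpp-python call is illustrative.

```python
# Sketch of a query-expansion step for a retrieval pipeline.
# The prompt wording here is an assumption, not the trained template.

def build_prompt(query: str, n: int = 4) -> str:
    """Build an instruction prompt asking for n query expansions."""
    return (
        f"Expand the search query into {n} related queries, "
        f"one per line.\nQuery: {query}\nExpansions:\n"
    )

def parse_expansions(completion: str) -> list[str]:
    """Parse newline-separated expansions, dropping bullets and blanks."""
    cleaned = (line.strip().lstrip("-• ").strip('"')
               for line in completion.splitlines())
    return [line for line in cleaned if line]

# Running it with llama-cpp-python (hypothetical local file path):
# from llama_cpp import Llama
# llm = Llama(model_path="query-expansion-Llama-3.2-3B.Q4_K_M.gguf")
# out = llm(build_prompt("apple stock"), max_tokens=64, stop=["\n\n"])
# print(parse_expansions(out["choices"][0]["text"]))
```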

Example

Input: "apple stock"

Expansions:

  • "apple market"
  • "apple news"
  • "apple stock price"
  • "apple stock forecast"

Citation

If you find my work helpful, feel free to give me a citation.


Model Specifications

Model size: 3.21B params
Architecture: llama

