GGUF quantizations of Seikaijyu/RWKV6-7B-v3-porn-chat: https://huggingface.co/Seikaijyu/RWKV6-7B-v3-porn-chat
In my experience, Q4_K_S and Q4_K_M usually offer the best balance between model size, quantization quality, and inference speed.
In some benchmarks, a larger-parameter model at a lower-bit quantization tends to outperform a smaller-parameter model at a higher-bit quantization.
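If you want to try one of these quants locally, below is a minimal sketch using llama-cpp-python. The file name is a hypothetical placeholder for whichever quant you download, and running it assumes your llama.cpp / llama-cpp-python build includes RWKV6 support; adjust the prompt format to match the base model's chat template.

```python
# Minimal sketch: loading a Q4_K_M GGUF with llama-cpp-python.
# The model path below is a placeholder, not a file guaranteed to exist in this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="RWKV6-7B-v3-porn-chat-Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,     # context window
    n_threads=8,    # CPU threads used for inference
)

# Simple completion call; the chat format here is an assumption, not the official template.
output = llm(
    "User: Hello!\n\nAssistant:",
    max_tokens=128,
    temperature=0.8,
    stop=["User:"],
)
print(output["choices"][0]["text"])
```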