TheBloke/Phind-CodeLlama-34B-v2-AWQ
Text Generation · Transformers · Safetensors · llama · code llama · Eval Results · text-generation-inference · 4-bit precision · awq
License: llama2
Community (3)
torch.bfloat16 is not supported for quantization method awq · 5 replies · #2 opened about 1 year ago by Pizzarino
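The error in discussion #2 arises because AWQ quantization kernels run in half precision, so loading the model with `torch_dtype=torch.bfloat16` is rejected; passing `torch.float16` (or leaving the dtype at its checkpoint default) avoids it. The gist of the dtype check can be sketched as follows. This is a minimal illustrative sketch, not the actual `transformers` internals: the function name `check_awq_dtype` and the string-based dtype representation are assumptions made for the example.

```python
# Illustrative sketch of the dtype validation behind the error
# "torch.bfloat16 is not supported for quantization method awq".
# check_awq_dtype and the string dtypes are hypothetical, not library code.

SUPPORTED_AWQ_DTYPE = "float16"  # AWQ kernels operate in half precision

def check_awq_dtype(requested: str) -> str:
    """Reject dtypes the AWQ kernels cannot use."""
    if requested != SUPPORTED_AWQ_DTYPE:
        raise ValueError(
            f"torch.{requested} is not supported for quantization method awq"
        )
    return requested

check_awq_dtype("float16")    # accepted
# check_awq_dtype("bfloat16") # would raise ValueError, as in discussion #2
```

In practice, the fix reported for this class of error is to request `float16` when loading, e.g. `AutoModelForCausalLM.from_pretrained("TheBloke/Phind-CodeLlama-34B-v2-AWQ", torch_dtype=torch.float16)`; the exact keyword accepted may vary across `transformers` versions.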