---
tags:
- llamafile
- GGUF
base_model: PrunaAI/WizardLM-2-7B-GGUF-smashed
---
## WizardLM-2-7B-Q4_0-llamafile-NonAVX
llamafile lets you distribute and run LLMs with a single file. See the [announcement blog post](https://hacks.mozilla.org/2023/11/introducing-llamafile/) for details.
#### Downloads
- [microsoft_WizardLM-2-7B.Q4_0.llamafile](https://huggingface.co/blueprintninja/WizardLM-2-7B-Q4_0-llamafile-NonAVX/resolve/main/microsoft_WizardLM-2-7B.Q4_0.llamafile)
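As a quick-start sketch (assuming a Linux or macOS shell and the filename from the download above), the llamafile can be marked executable and run directly:

```sh
# Make the downloaded llamafile executable (Linux/macOS)
chmod +x microsoft_WizardLM-2-7B.Q4_0.llamafile

# Run it; on Windows, rename the file to add a .exe extension instead
./microsoft_WizardLM-2-7B.Q4_0.llamafile
```

By default this launches a local llama.cpp-based chat interface in your browser; see the llamafile documentation for command-line options.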
This repository was created using the [llamafile-builder](https://github.com/rabilrbl/llamafile-builder).