---
license: mit
license_link: https://huggingface.co/microsoft/phi-4/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- phi
- nlp
- math
- code
- chat
- conversational
- llama-cpp
- gguf-my-repo
inference:
  parameters:
    temperature: 0
widget:
- messages:
  - role: user
    content: How should I explain the Internet?
library_name: transformers
base_model: microsoft/phi-4
---

# Triangle104/phi-4-Q4_K_M-GGUF
This model was converted to GGUF format from [`microsoft/phi-4`](https://huggingface.co/microsoft/phi-4) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/phi-4) for more details on the model.

---
Model details:

- Developers - Microsoft Research
- Description - phi-4 is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small, capable models were trained with data focused on high quality and advanced reasoning. phi-4 underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.
- Architecture - 14B parameters, dense decoder-only Transformer model
- Inputs - Text, best suited for prompts in the chat format
- Context length - 16K tokens
- GPUs - 1920 H100-80G
- Training time - 21 days
- Training data - 9.8T tokens
- Outputs - Generated text in response to input
- Dates - October 2024 – November 2024
- Status - Static model trained on an offline dataset with cutoff dates of June 2024 and earlier for publicly available data
- Release date - December 12, 2024
- License - MIT

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/phi-4-Q4_K_M-GGUF --hf-file phi-4-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/phi-4-Q4_K_M-GGUF --hf-file phi-4-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/phi-4-Q4_K_M-GGUF --hf-file phi-4-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/phi-4-Q4_K_M-GGUF --hf-file phi-4-q4_k_m.gguf -c 2048
```
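
Once llama-server is running, you can send chat requests to its OpenAI-compatible chat completions endpoint. A minimal sketch, assuming the server is listening on its default address and port (`http://localhost:8080`) and reusing the `temperature: 0` setting and example question from the front matter:

```bash
# Query the running llama-server via its OpenAI-compatible API.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "How should I explain the Internet?"}
        ],
        "temperature": 0
      }'
```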
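
Since phi-4 is best suited to prompts in the chat format, you may prefer llama-cli's conversation mode over raw text completion, so that the chat template embedded in the GGUF is applied to each turn. A minimal sketch, assuming a recent llama.cpp build where the `-cnv` flag enables conversation mode:

```bash
# Start an interactive chat session; the chat template stored in the GGUF
# metadata is applied to your input automatically.
llama-cli --hf-repo Triangle104/phi-4-Q4_K_M-GGUF --hf-file phi-4-q4_k_m.gguf -cnv
```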