Triangle104/SmolLM2-135M-Instruct-Q5_K_S-GGUF
This model was converted to GGUF format from HuggingFaceTB/SmolLM2-135M-Instruct using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
Model details:
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning. The 135M model was trained on 2 trillion tokens using a diverse combination of datasets: FineWeb-Edu, DCLM, and The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using UltraFeedback.
The instruct model additionally supports tasks such as text rewriting, summarization, and function calling (for the 1.7B) thanks to datasets developed by Argilla such as Synth-APIGen-v0.1. You can find the SFT dataset here: https://huggingface.co/datasets/HuggingFaceTB/smol-smoltalk and the finetuning code at https://github.com/huggingface/alignment-handbook/tree/main/recipes/smollm2
How to use Transformers
pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-135M-Instruct"
device = "cuda"  # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

messages = [{"role": "user", "content": "What is gravity?"}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
Chat in TRL
You can also use the TRL CLI to chat with the model from the terminal:
pip install trl
trl chat --model_name_or_path HuggingFaceTB/SmolLM2-135M-Instruct --device cpu
Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
brew install llama.cpp
Invoke the llama.cpp server or the CLI.
CLI:
llama-cli --hf-repo Triangle104/SmolLM2-135M-Instruct-Q5_K_S-GGUF --hf-file smollm2-135m-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
Server:
llama-server --hf-repo Triangle104/SmolLM2-135M-Instruct-Q5_K_S-GGUF --hf-file smollm2-135m-instruct-q5_k_s.gguf -c 2048
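Once the server is running, it exposes an OpenAI-compatible chat completions endpoint that you can call over HTTP. Below is a minimal Python sketch, assuming the server is on its default port (8080) and the requests package is installed.

import requests

# Assumption: llama-server running locally on the default port 8080;
# it serves an OpenAI-compatible /v1/chat/completions endpoint.
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "What is gravity?"}],
        "max_tokens": 64,
        "temperature": 0.2,
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])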
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
git clone https://github.com/ggerganov/llama.cpp
Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag along with any other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
cd llama.cpp && LLAMA_CURL=1 make
Step 3: Run inference through the main binary.
./llama-cli --hf-repo Triangle104/SmolLM2-135M-Instruct-Q5_K_S-GGUF --hf-file smollm2-135m-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
or
./llama-server --hf-repo Triangle104/SmolLM2-135M-Instruct-Q5_K_S-GGUF --hf-file smollm2-135m-instruct-q5_k_s.gguf -c 2048
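Beyond the CLI and server binaries, the GGUF file can also be used programmatically from Python via the llama-cpp-python bindings, which download the file from the Hub and run it with llama.cpp under the hood. A minimal sketch, assuming llama-cpp-python and huggingface_hub are installed (the helper names below come from those bindings, not from this repo):

from llama_cpp import Llama

# Assumption: llama-cpp-python with huggingface_hub installed;
# Llama.from_pretrained downloads the GGUF file from this repo and loads it.
llm = Llama.from_pretrained(
    repo_id="Triangle104/SmolLM2-135M-Instruct-Q5_K_S-GGUF",
    filename="smollm2-135m-instruct-q5_k_s.gguf",
    n_ctx=2048,
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is gravity?"}],
    max_tokens=64,
    temperature=0.2,
)
print(result["choices"][0]["message"]["content"])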
Model tree for Triangle104/SmolLM2-135M-Instruct-Q5_K_S-GGUF
Base model: HuggingFaceTB/SmolLM2-135M