---
license: other
base_model:
- TheDrummer/Hubble-4B-v1
library_name: transformers
quantized_by: Ex_y
base_model_relation: quantized
---

EXL2 quants of [TheDrummer/Hubble-4B-v1](https://huggingface.co/TheDrummer/Hubble-4B-v1)

Quantized with default parameters. The 6.5bpw and 8.0bpw quants use an 8-bit lm_head layer, while the 4.25bpw and 5.0bpw quants use a 6-bit lm_head layer.

# Join our Discord! https://discord.gg/Nbv9pQ88Xb

### Works on [Kobold 1.74](https://github.com/LostRuins/koboldcpp/releases/tag/v1.74)!

*([Layla (iOS / Android)](https://www.layla-network.ai/) support is in progress)*

---

[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...

# Hubble 4B v1

*Equipped with his five senses, man explores the universe around him and calls the adventure 'Science'.*

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/R8_o3CCpTgKv5Wnnry7E_.png)

## Description

This is a finetune of Nvidia's Llama 3.1 4B Minitron, a shrunken-down version of Llama 3.1 8B with 128K context.

### Usage
- ChatML or Text Completion
- Add `<|im_end|>` as a stop token

### Links
- Original: https://huggingface.co/TheDrummer/Hubble-4B-v1
- GGUF: https://huggingface.co/TheDrummer/Hubble-4B-v1-GGUF
- Chadquants: https://huggingface.co/bartowski/Hubble-4B-v1-GGUF

### Technical Note
Hubble was trained on ChatML with `<|end_of_text|>` as the EOS token. If you encounter any issues with the model, please let me know!
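
As an illustration of the Usage notes above, here is a minimal sketch of building a ChatML-formatted prompt and sending it to a local KoboldCpp instance with `<|im_end|>` as a stop sequence. The endpoint URL, port, and payload field names are assumptions based on KoboldCpp's KoboldAI-compatible API; adjust them to match your own setup and sampler settings.

```python
# Minimal sketch (assumed KoboldCpp endpoint and payload fields, not an official example).
import requests

def chatml_prompt(system: str, user: str) -> str:
    # ChatML format; the assistant turn is left open so the model completes it.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

payload = {
    "prompt": chatml_prompt("You are a helpful assistant.", "Who was Edwin Hubble?"),
    "max_length": 256,
    "stop_sequence": ["<|im_end|>"],  # add <|im_end|> as a stop token
}

# Assumes KoboldCpp is running locally on its default port.
response = requests.post("http://localhost:5001/api/v1/generate", json=payload)
print(response.json()["results"][0]["text"])
```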