zamal_

zamal

AI & ML interests

Anything that makes our life easy.

Organizations

FAU Erlangen-Nürnberg · Open-Source AI Meetup · lora concepts library · OpenSky · That Time I got Reincarnated as a Hugging Face Organization · ZeroGPU Explorers · LocalLLaMA · MLX Community · Social Post Explorers · Paris AI Running Club · Hugging Face Party @ PyTorch Conference

zamal's activity

posted an update 3 months ago
🚀 Announcement for the lovely community! 🚀

Just launched zamal/DeepSeek-VL-1.3B-Chat on Hugging Face, and it's ready for YOU to explore! 💬🖼️

This full-fledged model is perfect for advanced image and text interactions, with zero GPU required. DeepSeek-VL-1.3B-Chat typically needs around 8 GB of VRAM and almost 4 GB of storage, but now you can experience it hassle-free right in our Space!

Want something lighter? We've also uploaded a 4-bit quantized version (just around 1 GB!), available on my profile. Perfect for those with limited hardware. 🌍🔍

Come try it now and see what this model can do! 🚀✨
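As a back-of-the-envelope sketch of why the 4-bit version is so much smaller: weight storage scales roughly with parameter count times bits per parameter. The function below is my own illustrative estimate (it ignores quantization scales, zero-points, and any layers kept in higher precision), not an official sizing tool:

```python
# Rough weight-storage estimate at a given precision.
# size ≈ n_params * bits_per_param / 8 bytes; overheads are ignored.
def model_size_gb(n_params: float, bits: int) -> float:
    return n_params * bits / 8 / 1e9

n = 1.3e9                          # ~1.3B parameters
fp16_gb = model_size_gb(n, 16)     # ~2.6 GB of raw weights
int4_gb = model_size_gb(n, 4)      # ~0.65 GB of raw weights

print(f"fp16: {fp16_gb:.2f} GB, int4: {int4_gb:.2f} GB")
```

With packaging overhead on top, these estimates line up with the ~4 GB full checkpoint and ~1 GB quantized figures above.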

posted an update 3 months ago
Hello, lovely community! 🌟

Thrilled to announce that the Molmo 7B 4-bit Space, zamal/Molmo-4bit, is now live! 🚀 The model size has been reduced by almost six times with nearly no performance loss, and the results will leave you amazed!

It runs on ZeroGPU, making it incredibly accessible for everyone!

Check it out here and start exploring today!

Happy experimenting! 🎉
posted an update 3 months ago
🚀 New Model Release: zamal/Molmo-7B-GPTQ-4bit 🚀

Hello lovely community,

The zamal/Molmo-7B-GPTQ-4bit model is now available for all! This model has been heavily quantized, reducing its size by almost six times. It now occupies significantly less storage and VRAM, making it perfect for deployment on resource-constrained devices without compromising performance.

What you get:
• Efficient performance: maintains high accuracy despite heavy quantization.
• Reduced size: nearly six times smaller, optimizing storage and memory usage.
• Versatile application: ideal for integrating a powerful vision-language model into various projects, particularly multimodal RAG chains.

Check it out!
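For the curious, the storage trick behind 4-bit formats like GPTQ can be sketched as packing two 4-bit weight codes into each byte. This toy example is my own illustration of that idea, not the actual GPTQ kernels (which also store per-group scales and zero-points):

```python
import numpy as np

# Pack pairs of 4-bit codes (values 0..15) into single bytes:
# even-indexed value in the low nibble, odd-indexed in the high nibble.
def pack_int4(vals: np.ndarray) -> np.ndarray:
    assert vals.ndim == 1 and len(vals) % 2 == 0
    lo, hi = vals[0::2], vals[1::2]
    return (lo | (hi << 4)).astype(np.uint8)

def unpack_int4(packed: np.ndarray) -> np.ndarray:
    out = np.empty(2 * len(packed), dtype=np.uint8)
    out[0::2] = packed & 0x0F          # low nibble
    out[1::2] = (packed >> 4) & 0x0F   # high nibble
    return out

w = np.array([1, 15, 0, 7], dtype=np.uint8)
packed = pack_int4(w)                  # 2 bytes instead of 4
assert np.array_equal(unpack_int4(packed), w)
```

Relative to 16-bit weights this alone is a 4x reduction; the remaining savings in a real checkpoint come from quantizing most of the large linear layers while keeping only a small fraction in higher precision.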

posted an update 8 months ago
Finally!
My first post for the lovely community out there!

Here's a highly quantized, finetuned version of Gemma focused exclusively on prompt engineering. Write as ambiguously as you want and leave the job to this model:

zamal/gemma-7b-finetuned