Open-Orca/Mistral-7B-OpenOrca
Text Generation • 19.8k downloads • 677 likes
Note: Mistral model fine-tuned on the OpenOrca dataset
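Mistral-7B-OpenOrca uses a ChatML-style prompt format per its model card. A minimal prompt builder, assuming the standard `<|im_start|>`/`<|im_end|>` special tokens (verify against the model card before relying on the exact tokens):

```python
def chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML-style prompt of the kind Mistral-7B-OpenOrca
    was fine-tuned on. The special tokens below are an assumption
    taken from the common ChatML layout, not from this document."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt("You are a helpful assistant.", "What is OpenOrca?")
print(prompt)
```

The resulting string is what you would pass to the tokenizer; generation then continues from the open `assistant` turn.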
teknium/CollectiveCognition-v1.1-Mistral-7B
Text Generation • 28 downloads • 78 likes
Note: Another Mistral fine-tune, with great results on TruthfulQA
stabilityai/stablelm-3b-4e1t
Text Generation • 8.55k downloads • 309 likes
Note: Highly performant model by Stability AI. With just 3B params, it achieves great results
Efficient Streaming Language Models with Attention Sinks
Paper • 2309.17453 • 13 upvotes
Note: Check out this great blog post explaining the technique: https://huggingface.co/blog/tomaarsen/attention-sinks
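The core idea of the paper can be sketched in a few lines: when the KV cache fills up, keep the first few "sink" tokens plus a sliding window of the most recent tokens, evicting everything in between. This is an illustrative sketch of the eviction policy only; real implementations (such as the library described in the blog post above) operate on the model's KV cache directly:

```python
def attention_sink_keep(seq_len: int, num_sinks: int = 4, window: int = 1020):
    """Return the cache positions retained under an attention-sink
    eviction policy: the first `num_sinks` tokens (the "sinks") plus
    a sliding window over the `window` most recent tokens.

    Illustrative only; the default sizes are assumptions, not taken
    from the paper's exact configuration.
    """
    if seq_len <= num_sinks + window:
        return list(range(seq_len))  # cache not full yet, keep everything
    return list(range(num_sinks)) + list(range(seq_len - window, seq_len))

# With 4 sinks and a window of 4, a 12-token sequence keeps
# positions 0-3 (sinks) and 8-11 (recent window).
print(attention_sink_keep(12, num_sinks=4, window=4))
```

The surprise in the paper is that retaining those first few tokens, even when they carry no useful content, is what keeps perplexity stable far beyond the training context length.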
Stable Diffusion XL on TPUv5e
Note: Run SDXL on TPU, with an in-depth technical explanation
liuhaotian/llava-v1.5-7b
Image-Text-to-Text • 1.38M downloads • 395 likes
Note: A model trained on multimodal instruction-following data
defog/sqlcoder2
Text Generation • 317 downloads • 109 likes
Note: Code models for the win! This is a 15B model that turns natural language into SQL
defog/sqlcoder-7b
Text Generation • 367 downloads • 61 likes
Note: And this is the 7B version of the model above
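Text-to-SQL models like these are conditioned on a database schema plus the natural-language question. A hypothetical prompt builder showing the general shape (the exact template the defog models were trained with is documented on their model cards, so check there before use):

```python
def sql_prompt(question: str, schema_ddl: str) -> str:
    """Assemble a schema-plus-question prompt for a text-to-SQL model.

    Hypothetical layout for illustration -- the section headers below
    are assumptions, not the defog models' documented template.
    """
    return (
        "### Task\n"
        f"Generate a SQL query that answers the question: {question}\n\n"
        "### Database Schema\n"
        f"{schema_ddl}\n\n"
        "### SQL\n"
    )

ddl = "CREATE TABLE users (id INT, name TEXT, created_at DATE);"
print(sql_prompt("How many users signed up in 2023?", ddl))
```

Including the full DDL in the prompt is what lets the model ground column and table names instead of hallucinating them.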
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models
Paper • 2309.12284 • 19 upvotes
Viewer • 395k • 7.54k • 343
Note: A dataset of math questions for fine-tuning
AI Meme Generator
Note
Generate memes with IDEFICS, the multimodal model