prithivMLmods
committed on
Update README.md
README.md CHANGED
@@ -18,6 +18,10 @@ tags:

# **GWQ2b - Gemma with Questions2b**

+<a target="_blank" href="https://huggingface.co/spaces/prithivMLmods/GWQ-2B">
+  <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
+</a>
+
GWQ2b is a family of lightweight, state-of-the-art open models from Google, built using the same research and technology employed to create the Gemini models. These models are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained and instruction-tuned variants. GWQ2b models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. GWQ2b is fine-tuned on the Chain of Continuous Thought Synthetic Dataset, built upon the Gemma2ForCausalLM architecture.

# **Running GWQ2b Demo**
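
For context, the following is a minimal sketch of how a Gemma2ForCausalLM-based checkpoint like this is typically loaded and queried with Hugging Face Transformers. The `model_id` below is a placeholder (the actual model repository path is not shown in this diff), and the dtype/device settings are assumptions, not the repository's official demo code.

```python
# Minimal sketch (assumption): loading a Gemma2-based causal LM with Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/GWQ2b"  # placeholder repo id; replace with the actual model path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support; use float32 on CPU
    device_map="auto",
)

# Build a chat-style prompt and generate a response.
messages = [{"role": "user", "content": "Summarize why the sky appears blue."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```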