prithivMLmods committed
Commit 1d2a808 · verified · 1 Parent(s): 6054aca

Update README.md

Files changed (1): README.md (+1 -4)
README.md CHANGED

```diff
@@ -13,13 +13,10 @@ tags:
 - text-generation-inference
 - gwq2b
 ---
-
+![gwq2.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/Ayc6YKE6FKYKb8Mible4z.png)
 <a target="_blank" href="https://huggingface.co/spaces/prithivMLmods/GWQ-2B">
 <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="gwq2b.hf.space"/>
 </a>
-
-![gwq2.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/Ayc6YKE6FKYKb8Mible4z.png)
-
 # **GWQ2b - Gemma with Questions2b**
 
 GWQ2b is a family of lightweight, state-of-the-art open models from Google, built using the same research and technology employed to create the Gemini models. These models are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained and instruction-tuned variants. GWQ2b models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. GWQ2b is fine-tuned on the Chain of Continuous Thought Synthetic Dataset, built upon the Gemma2forCasualLM architecture.
```
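Since the README describes an instruction-tuned, decoder-only text-generation model, a minimal usage sketch may help readers. This assumes the checkpoint is published on the Hub; the repo id `prithivMLmods/GWQ2b` and the Gemma-style chat markers are assumptions not confirmed by the diff above, so substitute the actual checkpoint name.

```python
# Minimal sketch for running a Gemma-2-based chat model such as GWQ2b.
# The Hub repo id "prithivMLmods/GWQ2b" is an assumption; replace it with
# the real checkpoint name before use.

def build_gemma_prompt(user_message: str) -> str:
    """Format one user turn with Gemma-style chat markers."""
    return (
        f"<start_of_turn>user\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

def generate(user_message: str, repo_id: str = "prithivMLmods/GWQ2b") -> str:
    """Load the checkpoint and generate a reply (requires `transformers`)."""
    # Local import: transformers is a heavy optional dependency.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)
    inputs = tokenizer(build_gemma_prompt(user_message), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=128)
    # Decode only the tokens generated after the prompt.
    return tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
```

For a one-off interactive test, the hosted Space linked in the badge above avoids a local download entirely.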