jackos committed
Commit ebfcad1
1 Parent(s): 4611a36

Add gguf models
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.gguf filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,123 @@
  ---
+ language:
+ - en
  license: apache-2.0
+ tags:
+ - code
+ - Composer
+ - MosaicML
+ - llm-foundry
+ - StreamingDatasets
+ model_name: Replit Code V-1.5 3B
+ base_model: replit/replit-code-v1_5-3b
+ inference: false
+ model_creator: Replit
+ quantized_by: tzhenghao
+ datasets:
+ - bigcode/the-stack-dedup
+ - togethercomputer/RedPajama-Data-1T
  ---
+
+ # Replit Code V-1.5 3B - GGUF
+
+ - Model creator: [Replit](https://huggingface.co/replit)
+ - Original model: [Replit Code V-1.5 3B](https://huggingface.co/replit/replit-code-v1_5-3b)
+ - GGUF models quantized by: [tzhenghao](https://huggingface.co/tzhenghao)
+
+ <!-- description start -->
+ ## Description
+
+ This repo contains GGUF format model files for [Replit Code V-1.5 3B](https://huggingface.co/replit/replit-code-v1_5-3b).
+
+ <!-- description end -->
+
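+ The GGUF files added in this commit (`replit-code-v1_5-3b-f32.gguf` and `replit-code-v1_5-3b-bf16.gguf`) are intended for llama.cpp-style runtimes. Below is a minimal usage sketch (editor-added, not part of the original card); it assumes a local copy of one of the files and `llama-cpp-python` installed with a llama.cpp build that supports this model architecture.
+
+ ```python
+ from llama_cpp import Llama
+
+ # Path to a locally downloaded GGUF file from this repo (the f32 variant shown here).
+ llm = Llama(model_path="replit-code-v1_5-3b-f32.gguf", n_ctx=4096)
+
+ # The model is a code-completion model, so prompt it with code.
+ out = llm("def fibonacci(n): ", max_tokens=100, temperature=0.2, top_p=0.95)
+ print(out["choices"][0]["text"])
+ ```
+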
+ <!-- original-model-card start -->
+ ## Model Description
+
+ Replit Code v1.5 is a 3.3B parameter Causal Language Model focused on **Code Completion**.
+
+ The model is trained in `bfloat16` on 1T tokens of code (~200B tokens over 5 epochs, including linear cooldown) for 30 programming languages from a subset of permissively licensed code from BigCode's [Stack Dedup dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup), a filtered natural language sample from the Markdown and reStructuredText subsets of the same Stack Dedup dataset, and a dev-oriented sample from [RedPajama's StackExchange dataset](https://github.com/togethercomputer/RedPajama-Data) sourced from the [Stack Exchange Data Dump by Stack Exchange Inc](https://archive.org/details/stackexchange).
+
+ The 30 programming languages are:
+ ```
+ Java, JavaScript, C, PHP, Python, C++, C#, TypeScript, Go, CSS, HTML, Rust, Ruby, Swift, Scala, Shell, Lua, Perl, Haskell, JSX, Julia, Common Lisp, OCaml, Solidity, Scheme, R, Zig, SQL, Racket, D
+ ```
+
+ The context size of the model is 4096 tokens. We use the GPTNeoX tokenizer with a custom trained and optimized vocabulary of 32768 tokens. This custom vocabulary led to single-digit percentage-point improvements in compression while maintaining or improving coverage on our training corpus.
+
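+ As a quick sanity check (an editor-added sketch, not from the original card), the vocabulary size and context length stated above can be read back from the tokenizer and config; the `max_seq_len` attribute name assumes the MPT-style remote-code config this model ships with:
+
+ ```python
+ from transformers import AutoConfig, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained('replit/replit-code-v1_5-3b', trust_remote_code=True)
+ config = AutoConfig.from_pretrained('replit/replit-code-v1_5-3b', trust_remote_code=True)
+
+ print(tokenizer.vocab_size)                   # expected: 32768
+ print(getattr(config, 'max_seq_len', None))   # expected: 4096 (MPT-style configs expose max_seq_len)
+ ```
+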
+ The model has been trained on the [MosaicML](https://www.mosaicml.com/) platform on 128 H100-80GB GPUs using their [LLM Foundry](https://github.com/mosaicml/llm-foundry) and [Composer](https://github.com/mosaicml/composer) training libraries built on top of PyTorch.
+
+ ## Dependencies
+ You will need to install the latest versions of the following dependencies:
+ ```
+ einops
+ torch
+ transformers
+ ```
+
+ ## How to Use
+
+ ### Generation
+
+ You can generate code using the `transformers` library as follows:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained('replit/replit-code-v1_5-3b', trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained('replit/replit-code-v1_5-3b', trust_remote_code=True)
+
+ x = tokenizer.encode('def fibonacci(n): ', return_tensors='pt')
+ y = model.generate(x, max_length=100, do_sample=True, top_p=0.95, top_k=4, temperature=0.2, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
+
+ # decoding
+ generated_code = tokenizer.decode(y[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
+ print(generated_code)
+ ```
+
+ Experiment with different decoding methods and parameters to get the best results for your use case.
+
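+ To make "different decoding methods" concrete, here is a short editor-added sketch (not from the original card) contrasting greedy decoding, beam search, and nucleus sampling; it reuses `model`, `tokenizer`, and the encoded prompt `x` from the block above, and the parameter values are illustrative only:
+
+ ```python
+ # Greedy decoding: deterministic, picks the highest-probability token at each step.
+ y_greedy = model.generate(x, max_length=100, do_sample=False)
+
+ # Beam search: explores several candidate continuations and keeps the best-scoring one.
+ y_beam = model.generate(x, max_length=100, do_sample=False, num_beams=4, early_stopping=True)
+
+ # Nucleus (top-p) sampling: samples from the smallest token set whose probability mass exceeds top_p.
+ y_nucleus = model.generate(x, max_length=100, do_sample=True, top_p=0.95, temperature=0.2)
+
+ for y in (y_greedy, y_beam, y_nucleus):
+     print(tokenizer.decode(y[0], skip_special_tokens=True))
+     print('-' * 40)
+ ```
+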
+ ### Using Triton Implementation of Flash Attention
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig
+
+ config = AutoConfig.from_pretrained(
+     "replit/replit-code-v1_5-3b",
+     trust_remote_code=True
+ )
+ config.attn_config['attn_impl'] = 'triton'
+
+ # load model and tokenizer
+ tokenizer = AutoTokenizer.from_pretrained('replit/replit-code-v1_5-3b', trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained('replit/replit-code-v1_5-3b', config=config, trust_remote_code=True)
+ model.to(device='cuda:0', dtype=torch.bfloat16)
+
+ # forward pass
+ x = tokenizer.encode('def fibonacci(n): ', return_tensors='pt').to(device='cuda:0')
+ y = model.generate(x, max_length=100, do_sample=True, top_p=0.95, top_k=4, temperature=0.2, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
+
+ # decoding
+ generated_code = tokenizer.decode(y[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
+ print(generated_code)
+ ```
+
+ Experiment with different decoding methods and parameters to get the best results for your use case; in particular, we recommend tuning `temperature` and `repetition_penalty`.
+
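+ As a rough starting point (an editor-added sketch, not a recommendation from the original card), the two parameters named above can be swept over a small grid; this reuses `model`, `tokenizer`, and `x` from the block above, and the specific values are illustrative, not tuned:
+
+ ```python
+ # Small grid over the two parameters the card suggests tuning.
+ for temperature in (0.2, 0.8):
+     for repetition_penalty in (1.0, 1.2):
+         y = model.generate(
+             x,
+             max_length=100,
+             do_sample=True,
+             top_p=0.95,
+             temperature=temperature,
+             repetition_penalty=repetition_penalty,
+             eos_token_id=tokenizer.eos_token_id,
+         )
+         print(f"temperature={temperature}, repetition_penalty={repetition_penalty}")
+         print(tokenizer.decode(y[0], skip_special_tokens=True))
+         print('-' * 40)
+ ```
+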
+ ## Intended Use
+
+ Replit intends this model to be used by anyone as a foundational model for application-specific fine-tuning, without strict limitations on commercial use.
+
+ The model is trained specifically for code completion tasks.
+
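+ Since the card positions the model as a base for application-specific fine-tuning, here is a minimal causal-LM fine-tuning sketch (editor-added, not from the original card); the tiny in-memory corpus, output path, and all hyperparameters are placeholders to swap for your own:
+
+ ```python
+ import torch
+ from transformers import (
+     AutoModelForCausalLM,
+     AutoTokenizer,
+     DataCollatorForLanguageModeling,
+     Trainer,
+     TrainingArguments,
+ )
+
+ tokenizer = AutoTokenizer.from_pretrained('replit/replit-code-v1_5-3b', trust_remote_code=True)
+ if tokenizer.pad_token is None:
+     tokenizer.pad_token = tokenizer.eos_token  # needed for padding during collation
+
+ model = AutoModelForCausalLM.from_pretrained(
+     'replit/replit-code-v1_5-3b', trust_remote_code=True, torch_dtype=torch.bfloat16
+ )
+
+ # Placeholder corpus: swap in your own code snippets or a real dataset.
+ snippets = [
+     "def add(a, b):\n    return a + b\n",
+     "class Stack:\n    def __init__(self):\n        self.items = []\n",
+ ]
+ train_dataset = [tokenizer(s, truncation=True, max_length=512) for s in snippets]
+
+ # mlm=False gives the standard next-token (causal LM) objective.
+ collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
+
+ trainer = Trainer(
+     model=model,
+     args=TrainingArguments(
+         output_dir="replit-code-v1_5-3b-finetuned",  # placeholder output path
+         per_device_train_batch_size=1,
+         num_train_epochs=1,
+         bf16=True,  # assumes a GPU with bfloat16 support
+         logging_steps=1,
+     ),
+     train_dataset=train_dataset,
+     data_collator=collator,
+ )
+ trainer.train()
+ ```
+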
+ ## Limitations
+
+ The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing and toxicity and profanity filters, and such content may be reflected in model-generated text. We recommend that users exercise reasonable caution when using the model in production systems. Do not use it for any applications that may cause harm or distress to individuals or groups.
+
+ <!-- original-model-card end -->
replit-code-v1_5-3b-bf16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9908fff2007203760a38cc00b0d01dc41061f4abd40e815c67f14b23ccb74b6e
+ size 6645694784
replit-code-v1_5-3b-f32.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3f1031d874bc24baf0d764ce34c0f6b4f3843c9fcbe613b96bee7cb26ca5ef95
+ size 13289472320
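
The LFS pointers above give the expected SHA-256 digest and size for each GGUF file. After downloading, they can be verified with a short script (an editor-added sketch using only the Python standard library; it assumes the files sit in the current directory under the names committed here):

```python
import hashlib
from pathlib import Path

# Expected values taken from the LFS pointers in this commit.
EXPECTED = {
    "replit-code-v1_5-3b-bf16.gguf": ("9908fff2007203760a38cc00b0d01dc41061f4abd40e815c67f14b23ccb74b6e", 6645694784),
    "replit-code-v1_5-3b-f32.gguf": ("3f1031d874bc24baf0d764ce34c0f6b4f3843c9fcbe613b96bee7cb26ca5ef95", 13289472320),
}

for name, (expected_sha, expected_size) in EXPECTED.items():
    path = Path(name)
    if not path.exists():
        print(f"{name}: not downloaded, skipping")
        continue
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Hash in 1 MiB chunks so the multi-GB files are not read into memory at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    ok = digest.hexdigest() == expected_sha and path.stat().st_size == expected_size
    print(f"{name}: {'OK' if ok else 'MISMATCH'}")
```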