---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- prune
- notus7b
- Arcee
base_model: argilla/notus-7b-v1
pipeline_tag: text-generation
new_version: AINovice2005/LeEmpereur-final
---

# Model Name: LeEmpereur_70

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e8ea3892d9db9a93580fe3/lc5gftKyL60zY5JXq6fD-.png)

# Model Description

LeEmpereur_70 is a pruned version of [argilla/notus-7b-v1](https://huggingface.co/argilla/notus-7b-v1). The pruning was performed with the PruneMe library from Arcee.ai, which measures the similarity between the hidden states entering and leaving contiguous blocks of layers to find the layers that can be removed with the least impact. The strategy applied here removes approximately 70% of the model's parameters, hence the `_70` suffix.
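As an illustration of the underlying idea, layer redundancy can be estimated by comparing the hidden states at the start and end of a candidate block; blocks that barely rotate the representation are the natural pruning candidates. The following is a generic sketch of that analysis, not PruneMe's actual API:

```python
# Generic layer-redundancy probe (illustrative sketch, not PruneMe's API).
# Blocks of n consecutive layers whose input and output hidden states are
# nearly parallel contribute little and are candidates for pruning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "argilla/notus-7b-v1"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tok("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

hidden = out.hidden_states  # embedding output + one tensor per transformer layer
n = 8                       # size of the candidate block
for start in range(len(hidden) - n):
    a = hidden[start][0, -1].float()      # last-token state entering the block
    b = hidden[start + n][0, -1].float()  # last-token state leaving the block
    cos = torch.nn.functional.cosine_similarity(a, b, dim=0).clamp(-1.0, 1.0)
    dist = torch.arccos(cos) / torch.pi   # normalized angular distance
    print(f"layers {start}..{start + n}: angular distance {dist:.3f}")
```

In practice this is averaged over a calibration dataset rather than a single sentence, and the block with the smallest distance is dropped.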


## Configuration:
The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: argilla/notus-7b-v1
        layer_range: [0, 1]
  - sources:
      - model: argilla/notus-7b-v1
        layer_range: [2, 10]

merge_method: passthrough
dtype: bfloat16
```
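If mergekit's usual half-open `layer_range` convention applies, this passthrough merge keeps layer 0 and layers 2–9 of the original 32-layer network and discards the rest, which is consistent with the roughly 70% parameter reduction described above. A configuration like this is typically applied with mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yaml ./LeEmpereur_70`).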

๐‘๐ž๐ฌ๐ฎ๐ฅ๐ญ๐ฌ: Firstly, the ideal number of parameters to be pruned should be much lower in future iterations.Secondly, sizeable amount of finetuning should be done if model parameters are reduced to a greater extent.

๐๐จ๐ญ๐ž: This model is made with the intention to be used for fine-tuning. It should not to be used for inference as is.