---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- prune
- notus7b
- Arcee
base_model: argilla/notus-7b-v1
pipeline_tag: text-generation
new_version: AINovice2005/LeEmpereur-final
---

# Model Name: LeEmpereur_70

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e8ea3892d9db9a93580fe3/lc5gftKyL60zY5JXq6fD-.png)

# Model Description

LeEmpereur_70 is a pruned version of argilla/notus-7b-v1. Prunable layers were identified with the PruneMe library from Arcee.ai, and the resulting slice reduces the model's parameter count by approximately 70%.

## Configuration

The following YAML configuration (a mergekit passthrough merge) was used to produce this model:

```yaml
slices:
  - sources:
      - model: argilla/notus-7b-v1
        layer_range: [0, 1]
  - sources:
      - model: argilla/notus-7b-v1
        layer_range: [2, 10]
merge_method: passthrough
dtype: bfloat16
```

Because mergekit layer ranges are half-open, this configuration keeps only layer 0 and layers 2-9 of the base model's 32 decoder layers, which is what yields the roughly 70% parameter reduction.

## Results

Two takeaways from this experiment: first, the fraction of parameters pruned should be much lower in future iterations; second, when parameters are reduced to this extent, a sizeable amount of fine-tuning is needed to recover usable quality.

## Note

This model is released as a starting point for fine-tuning; it should not be used for inference as is. A minimal loading sketch follows below.
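Since the note above recommends fine-tuning before any use, the sketch below shows a minimal way to load the checkpoint for a fine-tuning run with transformers. The repo id `AINovice2005/LeEmpereur_70` is an assumption (this card does not state the exact id), and the tokenizer is taken from the base model, since layer pruning leaves the vocabulary untouched.

```python
# Minimal sketch of loading the pruned checkpoint for fine-tuning.
# Assumptions: the checkpoint lives at "AINovice2005/LeEmpereur_70"
# (hypothetical repo id, not stated on this card); the tokenizer is the
# base model's, since layer pruning does not change the vocabulary.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "AINovice2005/LeEmpereur_70",   # hypothetical repo id
    torch_dtype=torch.bfloat16,     # matches the dtype in the merge config
)
tokenizer = AutoTokenizer.from_pretrained("argilla/notus-7b-v1")

# Sanity check: the pruned model should be well under the base 7B.
print(f"parameters: {model.num_parameters() / 1e9:.2f}B")

# From here, hand `model` and `tokenizer` to your usual Trainer / SFT loop.
```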
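For background, PruneMe selects which layers to drop by measuring how little a stretch of layers changes the hidden states; the chosen slice is then materialized with mergekit (typically via its `mergekit-yaml` command line tool). The sketch below illustrates that similarity measurement in plain transformers, using per-layer cosine similarity; it is a simplified illustration of the idea, not PruneMe's actual interface.

```python
# Simplified illustration of similarity-based layer selection (the idea
# behind PruneMe), not its actual API: layers whose outputs barely differ
# from their inputs are candidates for pruning.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "argilla/notus-7b-v1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
model.eval()

inputs = tokenizer("A sample passage used to probe layer redundancy.",
                   return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs, output_hidden_states=True).hidden_states

# hidden[i] is the input to decoder layer i, hidden[i + 1] is its output.
for i in range(len(hidden) - 1):
    sim = F.cosine_similarity(hidden[i].float().flatten(1),
                              hidden[i + 1].float().flatten(1),
                              dim=-1).mean().item()
    print(f"layer {i:2d}: input/output cosine similarity = {sim:.4f}")
```

In practice the similarity is usually computed over contiguous blocks of several layers rather than single layers, but the principle is the same: the high-similarity regions flagged this way are the ones a passthrough config like the one above removes.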